Oct 10 09:04:48 localhost kernel: Linux version 5.14.0-621.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-11), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Tue Sep 30 07:37:35 UTC 2025
Oct 10 09:04:48 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Oct 10 09:04:48 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 root=UUID=9839e2e1-98a2-4594-b609-79d514deb0a3 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 10 09:04:48 localhost kernel: BIOS-provided physical RAM map:
Oct 10 09:04:48 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 10 09:04:48 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 10 09:04:48 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 10 09:04:48 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Oct 10 09:04:48 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Oct 10 09:04:48 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 10 09:04:48 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 10 09:04:48 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Oct 10 09:04:48 localhost kernel: NX (Execute Disable) protection: active
Oct 10 09:04:48 localhost kernel: APIC: Static calls initialized
Oct 10 09:04:48 localhost kernel: SMBIOS 2.8 present.
Oct 10 09:04:48 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Oct 10 09:04:48 localhost kernel: Hypervisor detected: KVM
Oct 10 09:04:48 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 10 09:04:48 localhost kernel: kvm-clock: using sched offset of 4352091932 cycles
Oct 10 09:04:48 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 10 09:04:48 localhost kernel: tsc: Detected 2799.998 MHz processor
Oct 10 09:04:48 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 10 09:04:48 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 10 09:04:48 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Oct 10 09:04:48 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 10 09:04:48 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Oct 10 09:04:48 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Oct 10 09:04:48 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Oct 10 09:04:48 localhost kernel: Using GB pages for direct mapping
Oct 10 09:04:48 localhost kernel: RAMDISK: [mem 0x2d858000-0x32c23fff]
Oct 10 09:04:48 localhost kernel: ACPI: Early table checksum verification disabled
Oct 10 09:04:48 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Oct 10 09:04:48 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 10 09:04:48 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 10 09:04:48 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 10 09:04:48 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Oct 10 09:04:48 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 10 09:04:48 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 10 09:04:48 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Oct 10 09:04:48 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Oct 10 09:04:48 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Oct 10 09:04:48 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Oct 10 09:04:48 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Oct 10 09:04:48 localhost kernel: No NUMA configuration found
Oct 10 09:04:48 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Oct 10 09:04:48 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Oct 10 09:04:48 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Oct 10 09:04:48 localhost kernel: Zone ranges:
Oct 10 09:04:48 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct 10 09:04:48 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Oct 10 09:04:48 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Oct 10 09:04:48 localhost kernel:   Device   empty
Oct 10 09:04:48 localhost kernel: Movable zone start for each node
Oct 10 09:04:48 localhost kernel: Early memory node ranges
Oct 10 09:04:48 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct 10 09:04:48 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Oct 10 09:04:48 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Oct 10 09:04:48 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Oct 10 09:04:48 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 10 09:04:48 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 10 09:04:48 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Oct 10 09:04:48 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Oct 10 09:04:48 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 10 09:04:48 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 10 09:04:48 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 10 09:04:48 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 10 09:04:48 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 10 09:04:48 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 10 09:04:48 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 10 09:04:48 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 10 09:04:48 localhost kernel: TSC deadline timer available
Oct 10 09:04:48 localhost kernel: CPU topo: Max. logical packages:   8
Oct 10 09:04:48 localhost kernel: CPU topo: Max. logical dies:       8
Oct 10 09:04:48 localhost kernel: CPU topo: Max. dies per package:   1
Oct 10 09:04:48 localhost kernel: CPU topo: Max. threads per core:   1
Oct 10 09:04:48 localhost kernel: CPU topo: Num. cores per package:     1
Oct 10 09:04:48 localhost kernel: CPU topo: Num. threads per package:   1
Oct 10 09:04:48 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Oct 10 09:04:48 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 10 09:04:48 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Oct 10 09:04:48 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Oct 10 09:04:48 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Oct 10 09:04:48 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Oct 10 09:04:48 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Oct 10 09:04:48 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Oct 10 09:04:48 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Oct 10 09:04:48 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Oct 10 09:04:48 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Oct 10 09:04:48 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Oct 10 09:04:48 localhost kernel: Booting paravirtualized kernel on KVM
Oct 10 09:04:48 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 10 09:04:48 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Oct 10 09:04:48 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Oct 10 09:04:48 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Oct 10 09:04:48 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Oct 10 09:04:48 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Oct 10 09:04:48 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 root=UUID=9839e2e1-98a2-4594-b609-79d514deb0a3 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 10 09:04:48 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64", will be passed to user space.
Oct 10 09:04:48 localhost kernel: random: crng init done
Oct 10 09:04:48 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct 10 09:04:48 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 10 09:04:48 localhost kernel: Fallback order for Node 0: 0 
Oct 10 09:04:48 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Oct 10 09:04:48 localhost kernel: Policy zone: Normal
Oct 10 09:04:48 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 10 09:04:48 localhost kernel: software IO TLB: area num 8.
Oct 10 09:04:48 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Oct 10 09:04:48 localhost kernel: ftrace: allocating 49162 entries in 193 pages
Oct 10 09:04:48 localhost kernel: ftrace: allocated 193 pages with 3 groups
Oct 10 09:04:48 localhost kernel: Dynamic Preempt: voluntary
Oct 10 09:04:48 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 10 09:04:48 localhost kernel: rcu:         RCU event tracing is enabled.
Oct 10 09:04:48 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Oct 10 09:04:48 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Oct 10 09:04:48 localhost kernel:         Rude variant of Tasks RCU enabled.
Oct 10 09:04:48 localhost kernel:         Tracing variant of Tasks RCU enabled.
Oct 10 09:04:48 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 10 09:04:48 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Oct 10 09:04:48 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 10 09:04:48 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 10 09:04:48 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct 10 09:04:48 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Oct 10 09:04:48 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 10 09:04:48 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Oct 10 09:04:48 localhost kernel: Console: colour VGA+ 80x25
Oct 10 09:04:48 localhost kernel: printk: console [ttyS0] enabled
Oct 10 09:04:48 localhost kernel: ACPI: Core revision 20230331
Oct 10 09:04:48 localhost kernel: APIC: Switch to symmetric I/O mode setup
Oct 10 09:04:48 localhost kernel: x2apic enabled
Oct 10 09:04:48 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Oct 10 09:04:48 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 10 09:04:48 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Oct 10 09:04:48 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 10 09:04:48 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 10 09:04:48 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 10 09:04:48 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 10 09:04:48 localhost kernel: Spectre V2 : Mitigation: Retpolines
Oct 10 09:04:48 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 10 09:04:48 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 10 09:04:48 localhost kernel: RETBleed: Mitigation: untrained return thunk
Oct 10 09:04:48 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 10 09:04:48 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 10 09:04:48 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 10 09:04:48 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 10 09:04:48 localhost kernel: x86/bugs: return thunk changed
Oct 10 09:04:48 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 10 09:04:48 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 10 09:04:48 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 10 09:04:48 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 10 09:04:48 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Oct 10 09:04:48 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 10 09:04:48 localhost kernel: Freeing SMP alternatives memory: 40K
Oct 10 09:04:48 localhost kernel: pid_max: default: 32768 minimum: 301
Oct 10 09:04:48 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Oct 10 09:04:48 localhost kernel: landlock: Up and running.
Oct 10 09:04:48 localhost kernel: Yama: becoming mindful.
Oct 10 09:04:48 localhost kernel: SELinux:  Initializing.
Oct 10 09:04:48 localhost kernel: LSM support for eBPF active
Oct 10 09:04:48 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 10 09:04:48 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 10 09:04:48 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 10 09:04:48 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 10 09:04:48 localhost kernel: ... version:                0
Oct 10 09:04:48 localhost kernel: ... bit width:              48
Oct 10 09:04:48 localhost kernel: ... generic registers:      6
Oct 10 09:04:48 localhost kernel: ... value mask:             0000ffffffffffff
Oct 10 09:04:48 localhost kernel: ... max period:             00007fffffffffff
Oct 10 09:04:48 localhost kernel: ... fixed-purpose events:   0
Oct 10 09:04:48 localhost kernel: ... event mask:             000000000000003f
Oct 10 09:04:48 localhost kernel: signal: max sigframe size: 1776
Oct 10 09:04:48 localhost kernel: rcu: Hierarchical SRCU implementation.
Oct 10 09:04:48 localhost kernel: rcu:         Max phase no-delay instances is 400.
Oct 10 09:04:48 localhost kernel: smp: Bringing up secondary CPUs ...
Oct 10 09:04:48 localhost kernel: smpboot: x86: Booting SMP configuration:
Oct 10 09:04:48 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Oct 10 09:04:48 localhost kernel: smp: Brought up 1 node, 8 CPUs
Oct 10 09:04:48 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Oct 10 09:04:48 localhost kernel: node 0 deferred pages initialised in 9ms
Oct 10 09:04:48 localhost kernel: Memory: 7765872K/8388068K available (16384K kernel code, 5784K rwdata, 13864K rodata, 4188K init, 7196K bss, 616208K reserved, 0K cma-reserved)
Oct 10 09:04:48 localhost kernel: devtmpfs: initialized
Oct 10 09:04:48 localhost kernel: x86/mm: Memory block size: 128MB
Oct 10 09:04:48 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 10 09:04:48 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Oct 10 09:04:48 localhost kernel: pinctrl core: initialized pinctrl subsystem
Oct 10 09:04:48 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 10 09:04:48 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Oct 10 09:04:48 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 10 09:04:48 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 10 09:04:48 localhost kernel: audit: initializing netlink subsys (disabled)
Oct 10 09:04:48 localhost kernel: audit: type=2000 audit(1760087086.473:1): state=initialized audit_enabled=0 res=1
Oct 10 09:04:48 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Oct 10 09:04:48 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 10 09:04:48 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 10 09:04:48 localhost kernel: cpuidle: using governor menu
Oct 10 09:04:48 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 10 09:04:48 localhost kernel: PCI: Using configuration type 1 for base access
Oct 10 09:04:48 localhost kernel: PCI: Using configuration type 1 for extended access
Oct 10 09:04:48 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 10 09:04:48 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 10 09:04:48 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 10 09:04:48 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 10 09:04:48 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 10 09:04:48 localhost kernel: Demotion targets for Node 0: null
Oct 10 09:04:48 localhost kernel: cryptd: max_cpu_qlen set to 1000
Oct 10 09:04:48 localhost kernel: ACPI: Added _OSI(Module Device)
Oct 10 09:04:48 localhost kernel: ACPI: Added _OSI(Processor Device)
Oct 10 09:04:48 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 10 09:04:48 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 10 09:04:48 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 10 09:04:48 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 10 09:04:48 localhost kernel: ACPI: Interpreter enabled
Oct 10 09:04:48 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Oct 10 09:04:48 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Oct 10 09:04:48 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 10 09:04:48 localhost kernel: PCI: Using E820 reservations for host bridge windows
Oct 10 09:04:48 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct 10 09:04:48 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 10 09:04:48 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Oct 10 09:04:48 localhost kernel: acpiphp: Slot [3] registered
Oct 10 09:04:48 localhost kernel: acpiphp: Slot [4] registered
Oct 10 09:04:48 localhost kernel: acpiphp: Slot [5] registered
Oct 10 09:04:48 localhost kernel: acpiphp: Slot [6] registered
Oct 10 09:04:48 localhost kernel: acpiphp: Slot [7] registered
Oct 10 09:04:48 localhost kernel: acpiphp: Slot [8] registered
Oct 10 09:04:48 localhost kernel: acpiphp: Slot [9] registered
Oct 10 09:04:48 localhost kernel: acpiphp: Slot [10] registered
Oct 10 09:04:48 localhost kernel: acpiphp: Slot [11] registered
Oct 10 09:04:48 localhost kernel: acpiphp: Slot [12] registered
Oct 10 09:04:48 localhost kernel: acpiphp: Slot [13] registered
Oct 10 09:04:48 localhost kernel: acpiphp: Slot [14] registered
Oct 10 09:04:48 localhost kernel: acpiphp: Slot [15] registered
Oct 10 09:04:48 localhost kernel: acpiphp: Slot [16] registered
Oct 10 09:04:48 localhost kernel: acpiphp: Slot [17] registered
Oct 10 09:04:48 localhost kernel: acpiphp: Slot [18] registered
Oct 10 09:04:48 localhost kernel: acpiphp: Slot [19] registered
Oct 10 09:04:48 localhost kernel: acpiphp: Slot [20] registered
Oct 10 09:04:48 localhost kernel: acpiphp: Slot [21] registered
Oct 10 09:04:48 localhost kernel: acpiphp: Slot [22] registered
Oct 10 09:04:48 localhost kernel: acpiphp: Slot [23] registered
Oct 10 09:04:48 localhost kernel: acpiphp: Slot [24] registered
Oct 10 09:04:48 localhost kernel: acpiphp: Slot [25] registered
Oct 10 09:04:48 localhost kernel: acpiphp: Slot [26] registered
Oct 10 09:04:48 localhost kernel: acpiphp: Slot [27] registered
Oct 10 09:04:48 localhost kernel: acpiphp: Slot [28] registered
Oct 10 09:04:48 localhost kernel: acpiphp: Slot [29] registered
Oct 10 09:04:48 localhost kernel: acpiphp: Slot [30] registered
Oct 10 09:04:48 localhost kernel: acpiphp: Slot [31] registered
Oct 10 09:04:48 localhost kernel: PCI host bridge to bus 0000:00
Oct 10 09:04:48 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Oct 10 09:04:48 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Oct 10 09:04:48 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 10 09:04:48 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 10 09:04:48 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Oct 10 09:04:48 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 10 09:04:48 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Oct 10 09:04:48 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Oct 10 09:04:48 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Oct 10 09:04:48 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Oct 10 09:04:48 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Oct 10 09:04:48 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Oct 10 09:04:48 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Oct 10 09:04:48 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Oct 10 09:04:48 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Oct 10 09:04:48 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Oct 10 09:04:48 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Oct 10 09:04:48 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Oct 10 09:04:48 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Oct 10 09:04:48 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Oct 10 09:04:48 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Oct 10 09:04:48 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Oct 10 09:04:48 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Oct 10 09:04:48 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Oct 10 09:04:48 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 10 09:04:48 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 10 09:04:48 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Oct 10 09:04:48 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Oct 10 09:04:48 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Oct 10 09:04:48 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Oct 10 09:04:48 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 10 09:04:48 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Oct 10 09:04:48 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Oct 10 09:04:48 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct 10 09:04:48 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Oct 10 09:04:48 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Oct 10 09:04:48 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct 10 09:04:48 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 10 09:04:48 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Oct 10 09:04:48 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Oct 10 09:04:48 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 10 09:04:48 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 10 09:04:48 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 10 09:04:48 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 10 09:04:48 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct 10 09:04:48 localhost kernel: iommu: Default domain type: Translated
Oct 10 09:04:48 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 10 09:04:48 localhost kernel: SCSI subsystem initialized
Oct 10 09:04:48 localhost kernel: ACPI: bus type USB registered
Oct 10 09:04:48 localhost kernel: usbcore: registered new interface driver usbfs
Oct 10 09:04:48 localhost kernel: usbcore: registered new interface driver hub
Oct 10 09:04:48 localhost kernel: usbcore: registered new device driver usb
Oct 10 09:04:48 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 10 09:04:48 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Oct 10 09:04:48 localhost kernel: PTP clock support registered
Oct 10 09:04:48 localhost kernel: EDAC MC: Ver: 3.0.0
Oct 10 09:04:48 localhost kernel: NetLabel: Initializing
Oct 10 09:04:48 localhost kernel: NetLabel:  domain hash size = 128
Oct 10 09:04:48 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Oct 10 09:04:48 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Oct 10 09:04:48 localhost kernel: PCI: Using ACPI for IRQ routing
Oct 10 09:04:48 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 10 09:04:48 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 10 09:04:48 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Oct 10 09:04:48 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct 10 09:04:48 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct 10 09:04:48 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 10 09:04:48 localhost kernel: vgaarb: loaded
Oct 10 09:04:48 localhost kernel: clocksource: Switched to clocksource kvm-clock
Oct 10 09:04:48 localhost kernel: VFS: Disk quotas dquot_6.6.0
Oct 10 09:04:48 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 10 09:04:48 localhost kernel: pnp: PnP ACPI init
Oct 10 09:04:48 localhost kernel: pnp 00:03: [dma 2]
Oct 10 09:04:48 localhost kernel: pnp: PnP ACPI: found 5 devices
Oct 10 09:04:48 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 10 09:04:48 localhost kernel: NET: Registered PF_INET protocol family
Oct 10 09:04:48 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 10 09:04:48 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct 10 09:04:48 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 10 09:04:48 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 10 09:04:48 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct 10 09:04:48 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct 10 09:04:48 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Oct 10 09:04:48 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 10 09:04:48 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 10 09:04:48 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 10 09:04:48 localhost kernel: NET: Registered PF_XDP protocol family
Oct 10 09:04:48 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Oct 10 09:04:48 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Oct 10 09:04:48 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 10 09:04:48 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Oct 10 09:04:48 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Oct 10 09:04:48 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct 10 09:04:48 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct 10 09:04:48 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct 10 09:04:48 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 73094 usecs
Oct 10 09:04:48 localhost kernel: PCI: CLS 0 bytes, default 64
Oct 10 09:04:48 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct 10 09:04:48 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Oct 10 09:04:48 localhost kernel: ACPI: bus type thunderbolt registered
Oct 10 09:04:48 localhost kernel: Trying to unpack rootfs image as initramfs...
Oct 10 09:04:48 localhost kernel: Initialise system trusted keyrings
Oct 10 09:04:48 localhost kernel: Key type blacklist registered
Oct 10 09:04:48 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Oct 10 09:04:48 localhost kernel: zbud: loaded
Oct 10 09:04:48 localhost kernel: integrity: Platform Keyring initialized
Oct 10 09:04:48 localhost kernel: integrity: Machine keyring initialized
Oct 10 09:04:48 localhost kernel: Freeing initrd memory: 85808K
Oct 10 09:04:48 localhost kernel: NET: Registered PF_ALG protocol family
Oct 10 09:04:48 localhost kernel: xor: automatically using best checksumming function   avx       
Oct 10 09:04:48 localhost kernel: Key type asymmetric registered
Oct 10 09:04:48 localhost kernel: Asymmetric key parser 'x509' registered
Oct 10 09:04:48 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Oct 10 09:04:48 localhost kernel: io scheduler mq-deadline registered
Oct 10 09:04:48 localhost kernel: io scheduler kyber registered
Oct 10 09:04:48 localhost kernel: io scheduler bfq registered
Oct 10 09:04:48 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Oct 10 09:04:48 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Oct 10 09:04:48 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Oct 10 09:04:48 localhost kernel: ACPI: button: Power Button [PWRF]
Oct 10 09:04:48 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct 10 09:04:48 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct 10 09:04:48 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct 10 09:04:48 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 10 09:04:48 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 10 09:04:48 localhost kernel: Non-volatile memory driver v1.3
Oct 10 09:04:48 localhost kernel: rdac: device handler registered
Oct 10 09:04:48 localhost kernel: hp_sw: device handler registered
Oct 10 09:04:48 localhost kernel: emc: device handler registered
Oct 10 09:04:48 localhost kernel: alua: device handler registered
Oct 10 09:04:48 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Oct 10 09:04:48 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Oct 10 09:04:48 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Oct 10 09:04:48 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Oct 10 09:04:48 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Oct 10 09:04:48 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Oct 10 09:04:48 localhost kernel: usb usb1: Product: UHCI Host Controller
Oct 10 09:04:48 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-621.el9.x86_64 uhci_hcd
Oct 10 09:04:48 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Oct 10 09:04:48 localhost kernel: hub 1-0:1.0: USB hub found
Oct 10 09:04:48 localhost kernel: hub 1-0:1.0: 2 ports detected
Oct 10 09:04:48 localhost kernel: usbcore: registered new interface driver usbserial_generic
Oct 10 09:04:48 localhost kernel: usbserial: USB Serial support registered for generic
Oct 10 09:04:48 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 10 09:04:48 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 10 09:04:48 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 10 09:04:48 localhost kernel: mousedev: PS/2 mouse device common for all mice
Oct 10 09:04:48 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 10 09:04:48 localhost kernel: rtc_cmos 00:04: registered as rtc0
Oct 10 09:04:48 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-10-10T09:04:47 UTC (1760087087)
Oct 10 09:04:48 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct 10 09:04:48 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 10 09:04:48 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 10 09:04:48 localhost kernel: usbcore: registered new interface driver usbhid
Oct 10 09:04:48 localhost kernel: usbhid: USB HID core driver
Oct 10 09:04:48 localhost kernel: drop_monitor: Initializing network drop monitor service
Oct 10 09:04:48 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Oct 10 09:04:48 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Oct 10 09:04:48 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Oct 10 09:04:48 localhost kernel: Initializing XFRM netlink socket
Oct 10 09:04:48 localhost kernel: NET: Registered PF_INET6 protocol family
Oct 10 09:04:48 localhost kernel: Segment Routing with IPv6
Oct 10 09:04:48 localhost kernel: NET: Registered PF_PACKET protocol family
Oct 10 09:04:48 localhost kernel: mpls_gso: MPLS GSO support
Oct 10 09:04:48 localhost kernel: IPI shorthand broadcast: enabled
Oct 10 09:04:48 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Oct 10 09:04:48 localhost kernel: AES CTR mode by8 optimization enabled
Oct 10 09:04:48 localhost kernel: sched_clock: Marking stable (1306004627, 153321865)->(1540933990, -81607498)
Oct 10 09:04:48 localhost kernel: registered taskstats version 1
Oct 10 09:04:48 localhost kernel: Loading compiled-in X.509 certificates
Oct 10 09:04:48 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 72f99a463516b0dfb027e50caab189f607ef1bc9'
Oct 10 09:04:48 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Oct 10 09:04:48 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Oct 10 09:04:48 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Oct 10 09:04:48 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Oct 10 09:04:48 localhost kernel: Demotion targets for Node 0: null
Oct 10 09:04:48 localhost kernel: page_owner is disabled
Oct 10 09:04:48 localhost kernel: Key type .fscrypt registered
Oct 10 09:04:48 localhost kernel: Key type fscrypt-provisioning registered
Oct 10 09:04:48 localhost kernel: Key type big_key registered
Oct 10 09:04:48 localhost kernel: Key type encrypted registered
Oct 10 09:04:48 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 10 09:04:48 localhost kernel: Loading compiled-in module X.509 certificates
Oct 10 09:04:48 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 72f99a463516b0dfb027e50caab189f607ef1bc9'
Oct 10 09:04:48 localhost kernel: ima: Allocated hash algorithm: sha256
Oct 10 09:04:48 localhost kernel: ima: No architecture policies found
Oct 10 09:04:48 localhost kernel: evm: Initialising EVM extended attributes:
Oct 10 09:04:48 localhost kernel: evm: security.selinux
Oct 10 09:04:48 localhost kernel: evm: security.SMACK64 (disabled)
Oct 10 09:04:48 localhost kernel: evm: security.SMACK64EXEC (disabled)
Oct 10 09:04:48 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Oct 10 09:04:48 localhost kernel: evm: security.SMACK64MMAP (disabled)
Oct 10 09:04:48 localhost kernel: evm: security.apparmor (disabled)
Oct 10 09:04:48 localhost kernel: evm: security.ima
Oct 10 09:04:48 localhost kernel: evm: security.capability
Oct 10 09:04:48 localhost kernel: evm: HMAC attrs: 0x1
Oct 10 09:04:48 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Oct 10 09:04:48 localhost kernel: Running certificate verification RSA selftest
Oct 10 09:04:48 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Oct 10 09:04:48 localhost kernel: Running certificate verification ECDSA selftest
Oct 10 09:04:48 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Oct 10 09:04:48 localhost kernel: clk: Disabling unused clocks
Oct 10 09:04:48 localhost kernel: Freeing unused decrypted memory: 2028K
Oct 10 09:04:48 localhost kernel: Freeing unused kernel image (initmem) memory: 4188K
Oct 10 09:04:48 localhost kernel: Write protecting the kernel read-only data: 30720k
Oct 10 09:04:48 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 472K
Oct 10 09:04:48 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Oct 10 09:04:48 localhost kernel: Run /init as init process
Oct 10 09:04:48 localhost kernel:   with arguments:
Oct 10 09:04:48 localhost kernel:     /init
Oct 10 09:04:48 localhost kernel:   with environment:
Oct 10 09:04:48 localhost kernel:     HOME=/
Oct 10 09:04:48 localhost kernel:     TERM=linux
Oct 10 09:04:48 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64
Oct 10 09:04:48 localhost systemd[1]: systemd 252-57.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 10 09:04:48 localhost systemd[1]: Detected virtualization kvm.
Oct 10 09:04:48 localhost systemd[1]: Detected architecture x86-64.
Oct 10 09:04:48 localhost systemd[1]: Running in initrd.
Oct 10 09:04:48 localhost systemd[1]: No hostname configured, using default hostname.
Oct 10 09:04:48 localhost systemd[1]: Hostname set to <localhost>.
Oct 10 09:04:48 localhost systemd[1]: Initializing machine ID from VM UUID.
Oct 10 09:04:48 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Oct 10 09:04:48 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Oct 10 09:04:48 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Oct 10 09:04:48 localhost kernel: usb 1-1: Manufacturer: QEMU
Oct 10 09:04:48 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Oct 10 09:04:48 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Oct 10 09:04:48 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Oct 10 09:04:48 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Oct 10 09:04:48 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Oct 10 09:04:48 localhost systemd[1]: Reached target Local Encrypted Volumes.
Oct 10 09:04:48 localhost systemd[1]: Reached target Initrd /usr File System.
Oct 10 09:04:48 localhost systemd[1]: Reached target Local File Systems.
Oct 10 09:04:48 localhost systemd[1]: Reached target Path Units.
Oct 10 09:04:48 localhost systemd[1]: Reached target Slice Units.
Oct 10 09:04:48 localhost systemd[1]: Reached target Swaps.
Oct 10 09:04:48 localhost systemd[1]: Reached target Timer Units.
Oct 10 09:04:48 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 10 09:04:48 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Oct 10 09:04:48 localhost systemd[1]: Listening on Journal Socket.
Oct 10 09:04:48 localhost systemd[1]: Listening on udev Control Socket.
Oct 10 09:04:48 localhost systemd[1]: Listening on udev Kernel Socket.
Oct 10 09:04:48 localhost systemd[1]: Reached target Socket Units.
Oct 10 09:04:48 localhost systemd[1]: Starting Create List of Static Device Nodes...
Oct 10 09:04:48 localhost systemd[1]: Starting Journal Service...
Oct 10 09:04:48 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct 10 09:04:48 localhost systemd[1]: Starting Apply Kernel Variables...
Oct 10 09:04:48 localhost systemd[1]: Starting Create System Users...
Oct 10 09:04:48 localhost systemd[1]: Starting Setup Virtual Console...
Oct 10 09:04:48 localhost systemd[1]: Finished Create List of Static Device Nodes.
Oct 10 09:04:48 localhost systemd[1]: Finished Apply Kernel Variables.
Oct 10 09:04:48 localhost systemd[1]: Finished Create System Users.
Oct 10 09:04:48 localhost systemd-journald[307]: Journal started
Oct 10 09:04:48 localhost systemd-journald[307]: Runtime Journal (/run/log/journal/b3ce59718a214607a1ce4c5a00fcffdd) is 8.0M, max 153.6M, 145.6M free.
Oct 10 09:04:48 localhost systemd-sysusers[312]: Creating group 'users' with GID 100.
Oct 10 09:04:48 localhost systemd-sysusers[312]: Creating group 'dbus' with GID 81.
Oct 10 09:04:48 localhost systemd-sysusers[312]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Oct 10 09:04:48 localhost systemd[1]: Started Journal Service.
Oct 10 09:04:48 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 10 09:04:48 localhost systemd[1]: Starting Create Volatile Files and Directories...
Oct 10 09:04:48 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 10 09:04:48 localhost systemd[1]: Finished Create Volatile Files and Directories.
Oct 10 09:04:48 localhost systemd[1]: Finished Setup Virtual Console.
Oct 10 09:04:48 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Oct 10 09:04:48 localhost systemd[1]: Starting dracut cmdline hook...
Oct 10 09:04:48 localhost dracut-cmdline[327]: dracut-9 dracut-057-102.git20250818.el9
Oct 10 09:04:48 localhost dracut-cmdline[327]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-621.el9.x86_64 root=UUID=9839e2e1-98a2-4594-b609-79d514deb0a3 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 10 09:04:48 localhost systemd[1]: Finished dracut cmdline hook.
Oct 10 09:04:48 localhost systemd[1]: Starting dracut pre-udev hook...
Oct 10 09:04:48 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 10 09:04:48 localhost kernel: device-mapper: uevent: version 1.0.3
Oct 10 09:04:48 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Oct 10 09:04:48 localhost kernel: RPC: Registered named UNIX socket transport module.
Oct 10 09:04:48 localhost kernel: RPC: Registered udp transport module.
Oct 10 09:04:48 localhost kernel: RPC: Registered tcp transport module.
Oct 10 09:04:48 localhost kernel: RPC: Registered tcp-with-tls transport module.
Oct 10 09:04:48 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct 10 09:04:48 localhost rpc.statd[445]: Version 2.5.4 starting
Oct 10 09:04:48 localhost rpc.statd[445]: Initializing NSM state
Oct 10 09:04:48 localhost rpc.idmapd[450]: Setting log level to 0
Oct 10 09:04:48 localhost systemd[1]: Finished dracut pre-udev hook.
Oct 10 09:04:48 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 10 09:04:48 localhost systemd-udevd[463]: Using default interface naming scheme 'rhel-9.0'.
Oct 10 09:04:48 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 10 09:04:48 localhost systemd[1]: Starting dracut pre-trigger hook...
Oct 10 09:04:48 localhost systemd[1]: Finished dracut pre-trigger hook.
Oct 10 09:04:48 localhost systemd[1]: Starting Coldplug All udev Devices...
Oct 10 09:04:49 localhost systemd[1]: Created slice Slice /system/modprobe.
Oct 10 09:04:49 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 10 09:04:49 localhost systemd[1]: Finished Coldplug All udev Devices.
Oct 10 09:04:49 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 10 09:04:49 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 10 09:04:49 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 10 09:04:49 localhost systemd[1]: Reached target Network.
Oct 10 09:04:49 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 10 09:04:49 localhost systemd[1]: Starting dracut initqueue hook...
Oct 10 09:04:49 localhost systemd[1]: Mounting Kernel Configuration File System...
Oct 10 09:04:49 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Oct 10 09:04:49 localhost systemd[1]: Mounted Kernel Configuration File System.
Oct 10 09:04:49 localhost systemd[1]: Reached target System Initialization.
Oct 10 09:04:49 localhost systemd[1]: Reached target Basic System.
Oct 10 09:04:49 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Oct 10 09:04:49 localhost kernel:  vda: vda1
Oct 10 09:04:49 localhost kernel: libata version 3.00 loaded.
Oct 10 09:04:49 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Oct 10 09:04:49 localhost kernel: scsi host0: ata_piix
Oct 10 09:04:49 localhost kernel: scsi host1: ata_piix
Oct 10 09:04:49 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Oct 10 09:04:49 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Oct 10 09:04:49 localhost systemd[1]: Found device /dev/disk/by-uuid/9839e2e1-98a2-4594-b609-79d514deb0a3.
Oct 10 09:04:49 localhost systemd-udevd[468]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 09:04:49 localhost systemd[1]: Reached target Initrd Root Device.
Oct 10 09:04:49 localhost kernel: ata1: found unknown device (class 0)
Oct 10 09:04:49 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 10 09:04:49 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Oct 10 09:04:49 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Oct 10 09:04:49 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 10 09:04:49 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 10 09:04:49 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Oct 10 09:04:49 localhost systemd[1]: Finished dracut initqueue hook.
Oct 10 09:04:49 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Oct 10 09:04:49 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Oct 10 09:04:49 localhost systemd[1]: Reached target Remote File Systems.
Oct 10 09:04:49 localhost systemd[1]: Starting dracut pre-mount hook...
Oct 10 09:04:49 localhost systemd[1]: Finished dracut pre-mount hook.
Oct 10 09:04:49 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/9839e2e1-98a2-4594-b609-79d514deb0a3...
Oct 10 09:04:49 localhost systemd-fsck[556]: /usr/sbin/fsck.xfs: XFS file system.
Oct 10 09:04:49 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/9839e2e1-98a2-4594-b609-79d514deb0a3.
Oct 10 09:04:49 localhost systemd[1]: Mounting /sysroot...
Oct 10 09:04:50 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Oct 10 09:04:50 localhost kernel: XFS (vda1): Mounting V5 Filesystem 9839e2e1-98a2-4594-b609-79d514deb0a3
Oct 10 09:04:50 localhost kernel: XFS (vda1): Ending clean mount
Oct 10 09:04:50 localhost systemd[1]: Mounted /sysroot.
Oct 10 09:04:50 localhost systemd[1]: Reached target Initrd Root File System.
Oct 10 09:04:50 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Oct 10 09:04:50 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 10 09:04:50 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Oct 10 09:04:50 localhost systemd[1]: Reached target Initrd File Systems.
Oct 10 09:04:50 localhost systemd[1]: Reached target Initrd Default Target.
Oct 10 09:04:50 localhost systemd[1]: Starting dracut mount hook...
Oct 10 09:04:50 localhost systemd[1]: Finished dracut mount hook.
Oct 10 09:04:50 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Oct 10 09:04:50 localhost rpc.idmapd[450]: exiting on signal 15
Oct 10 09:04:50 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Oct 10 09:04:50 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Oct 10 09:04:50 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Oct 10 09:04:50 localhost systemd[1]: Stopped target Network.
Oct 10 09:04:50 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Oct 10 09:04:50 localhost systemd[1]: Stopped target Timer Units.
Oct 10 09:04:50 localhost systemd[1]: dbus.socket: Deactivated successfully.
Oct 10 09:04:50 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Oct 10 09:04:50 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 10 09:04:50 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Oct 10 09:04:50 localhost systemd[1]: Stopped target Initrd Default Target.
Oct 10 09:04:50 localhost systemd[1]: Stopped target Basic System.
Oct 10 09:04:50 localhost systemd[1]: Stopped target Initrd Root Device.
Oct 10 09:04:50 localhost systemd[1]: Stopped target Initrd /usr File System.
Oct 10 09:04:50 localhost systemd[1]: Stopped target Path Units.
Oct 10 09:04:50 localhost systemd[1]: Stopped target Remote File Systems.
Oct 10 09:04:50 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Oct 10 09:04:50 localhost systemd[1]: Stopped target Slice Units.
Oct 10 09:04:50 localhost systemd[1]: Stopped target Socket Units.
Oct 10 09:04:50 localhost systemd[1]: Stopped target System Initialization.
Oct 10 09:04:50 localhost systemd[1]: Stopped target Local File Systems.
Oct 10 09:04:50 localhost systemd[1]: Stopped target Swaps.
Oct 10 09:04:50 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Oct 10 09:04:50 localhost systemd[1]: Stopped dracut mount hook.
Oct 10 09:04:50 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 10 09:04:50 localhost systemd[1]: Stopped dracut pre-mount hook.
Oct 10 09:04:50 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Oct 10 09:04:50 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 10 09:04:50 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Oct 10 09:04:50 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 10 09:04:50 localhost systemd[1]: Stopped dracut initqueue hook.
Oct 10 09:04:50 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 10 09:04:50 localhost systemd[1]: Stopped Apply Kernel Variables.
Oct 10 09:04:50 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 10 09:04:50 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Oct 10 09:04:50 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 10 09:04:50 localhost systemd[1]: Stopped Coldplug All udev Devices.
Oct 10 09:04:50 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 10 09:04:50 localhost systemd[1]: Stopped dracut pre-trigger hook.
Oct 10 09:04:50 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct 10 09:04:50 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 10 09:04:50 localhost systemd[1]: Stopped Setup Virtual Console.
Oct 10 09:04:50 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct 10 09:04:50 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 10 09:04:50 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 10 09:04:50 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Oct 10 09:04:50 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 10 09:04:50 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct 10 09:04:50 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 10 09:04:50 localhost systemd[1]: Closed udev Control Socket.
Oct 10 09:04:50 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 10 09:04:50 localhost systemd[1]: Closed udev Kernel Socket.
Oct 10 09:04:50 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 10 09:04:50 localhost systemd[1]: Stopped dracut pre-udev hook.
Oct 10 09:04:50 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 10 09:04:50 localhost systemd[1]: Stopped dracut cmdline hook.
Oct 10 09:04:50 localhost systemd[1]: Starting Cleanup udev Database...
Oct 10 09:04:50 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 10 09:04:50 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Oct 10 09:04:50 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 10 09:04:50 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Oct 10 09:04:50 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Oct 10 09:04:50 localhost systemd[1]: Stopped Create System Users.
Oct 10 09:04:50 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Oct 10 09:04:50 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Oct 10 09:04:50 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 10 09:04:50 localhost systemd[1]: Finished Cleanup udev Database.
Oct 10 09:04:50 localhost systemd[1]: Reached target Switch Root.
Oct 10 09:04:50 localhost systemd[1]: Starting Switch Root...
Oct 10 09:04:50 localhost systemd[1]: Switching root.
Oct 10 09:04:50 localhost systemd-journald[307]: Journal stopped
Oct 10 09:04:51 localhost systemd-journald[307]: Received SIGTERM from PID 1 (systemd).
Oct 10 09:04:51 localhost kernel: audit: type=1404 audit(1760087090.673:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Oct 10 09:04:51 localhost kernel: SELinux:  policy capability network_peer_controls=1
Oct 10 09:04:51 localhost kernel: SELinux:  policy capability open_perms=1
Oct 10 09:04:51 localhost kernel: SELinux:  policy capability extended_socket_class=1
Oct 10 09:04:51 localhost kernel: SELinux:  policy capability always_check_network=0
Oct 10 09:04:51 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 10 09:04:51 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 10 09:04:51 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 10 09:04:51 localhost kernel: audit: type=1403 audit(1760087090.842:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 10 09:04:51 localhost systemd[1]: Successfully loaded SELinux policy in 175.716ms.
Oct 10 09:04:51 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.036ms.
Oct 10 09:04:51 localhost systemd[1]: systemd 252-57.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 10 09:04:51 localhost systemd[1]: Detected virtualization kvm.
Oct 10 09:04:51 localhost systemd[1]: Detected architecture x86-64.
Oct 10 09:04:51 localhost systemd-rc-local-generator[637]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:04:51 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 10 09:04:51 localhost systemd[1]: Stopped Switch Root.
Oct 10 09:04:51 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 10 09:04:51 localhost systemd[1]: Created slice Slice /system/getty.
Oct 10 09:04:51 localhost systemd[1]: Created slice Slice /system/serial-getty.
Oct 10 09:04:51 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Oct 10 09:04:51 localhost systemd[1]: Created slice User and Session Slice.
Oct 10 09:04:51 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Oct 10 09:04:51 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Oct 10 09:04:51 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Oct 10 09:04:51 localhost systemd[1]: Reached target Local Encrypted Volumes.
Oct 10 09:04:51 localhost systemd[1]: Stopped target Switch Root.
Oct 10 09:04:51 localhost systemd[1]: Stopped target Initrd File Systems.
Oct 10 09:04:51 localhost systemd[1]: Stopped target Initrd Root File System.
Oct 10 09:04:51 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Oct 10 09:04:51 localhost systemd[1]: Reached target Path Units.
Oct 10 09:04:51 localhost systemd[1]: Reached target rpc_pipefs.target.
Oct 10 09:04:51 localhost systemd[1]: Reached target Slice Units.
Oct 10 09:04:51 localhost systemd[1]: Reached target Swaps.
Oct 10 09:04:51 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Oct 10 09:04:51 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Oct 10 09:04:51 localhost systemd[1]: Reached target RPC Port Mapper.
Oct 10 09:04:51 localhost systemd[1]: Listening on Process Core Dump Socket.
Oct 10 09:04:51 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Oct 10 09:04:51 localhost systemd[1]: Listening on udev Control Socket.
Oct 10 09:04:51 localhost systemd[1]: Listening on udev Kernel Socket.
Oct 10 09:04:51 localhost systemd[1]: Mounting Huge Pages File System...
Oct 10 09:04:51 localhost systemd[1]: Mounting POSIX Message Queue File System...
Oct 10 09:04:51 localhost systemd[1]: Mounting Kernel Debug File System...
Oct 10 09:04:51 localhost systemd[1]: Mounting Kernel Trace File System...
Oct 10 09:04:51 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 10 09:04:51 localhost systemd[1]: Starting Create List of Static Device Nodes...
Oct 10 09:04:51 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 10 09:04:51 localhost systemd[1]: Starting Load Kernel Module drm...
Oct 10 09:04:51 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Oct 10 09:04:51 localhost systemd[1]: Starting Load Kernel Module fuse...
Oct 10 09:04:51 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Oct 10 09:04:51 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 10 09:04:51 localhost systemd[1]: Stopped File System Check on Root Device.
Oct 10 09:04:51 localhost systemd[1]: Stopped Journal Service.
Oct 10 09:04:51 localhost systemd[1]: Starting Journal Service...
Oct 10 09:04:51 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct 10 09:04:51 localhost systemd[1]: Starting Generate network units from Kernel command line...
Oct 10 09:04:51 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 10 09:04:51 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Oct 10 09:04:51 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 10 09:04:51 localhost systemd[1]: Starting Apply Kernel Variables...
Oct 10 09:04:51 localhost systemd[1]: Starting Coldplug All udev Devices...
Oct 10 09:04:51 localhost systemd[1]: Mounted Huge Pages File System.
Oct 10 09:04:51 localhost systemd[1]: Mounted POSIX Message Queue File System.
Oct 10 09:04:51 localhost systemd-journald[678]: Journal started
Oct 10 09:04:51 localhost systemd-journald[678]: Runtime Journal (/run/log/journal/a1727ec20198bc6caf436a6e13c4ff5e) is 8.0M, max 153.6M, 145.6M free.
Oct 10 09:04:51 localhost systemd[1]: Queued start job for default target Multi-User System.
Oct 10 09:04:51 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 10 09:04:51 localhost systemd[1]: Started Journal Service.
Oct 10 09:04:51 localhost systemd[1]: Mounted Kernel Debug File System.
Oct 10 09:04:51 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Oct 10 09:04:51 localhost systemd[1]: Mounted Kernel Trace File System.
Oct 10 09:04:51 localhost kernel: ACPI: bus type drm_connector registered
Oct 10 09:04:51 localhost kernel: fuse: init (API version 7.37)
Oct 10 09:04:51 localhost systemd[1]: Finished Create List of Static Device Nodes.
Oct 10 09:04:51 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 10 09:04:51 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 10 09:04:51 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 10 09:04:51 localhost systemd[1]: Finished Load Kernel Module drm.
Oct 10 09:04:51 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 10 09:04:51 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Oct 10 09:04:51 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 10 09:04:51 localhost systemd[1]: Finished Load Kernel Module fuse.
Oct 10 09:04:51 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Oct 10 09:04:51 localhost systemd[1]: Finished Generate network units from Kernel command line.
Oct 10 09:04:51 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Oct 10 09:04:51 localhost systemd[1]: Finished Apply Kernel Variables.
Oct 10 09:04:51 localhost systemd[1]: Mounting FUSE Control File System...
Oct 10 09:04:51 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 10 09:04:51 localhost systemd[1]: Starting Rebuild Hardware Database...
Oct 10 09:04:51 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Oct 10 09:04:51 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 10 09:04:51 localhost systemd[1]: Starting Load/Save OS Random Seed...
Oct 10 09:04:51 localhost systemd[1]: Starting Create System Users...
Oct 10 09:04:51 localhost systemd[1]: Mounted FUSE Control File System.
Oct 10 09:04:51 localhost systemd-journald[678]: Runtime Journal (/run/log/journal/a1727ec20198bc6caf436a6e13c4ff5e) is 8.0M, max 153.6M, 145.6M free.
Oct 10 09:04:51 localhost systemd-journald[678]: Received client request to flush runtime journal.
Oct 10 09:04:51 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Oct 10 09:04:51 localhost systemd[1]: Finished Load/Save OS Random Seed.
Oct 10 09:04:51 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 10 09:04:51 localhost systemd[1]: Finished Coldplug All udev Devices.
Oct 10 09:04:51 localhost systemd[1]: Finished Create System Users.
Oct 10 09:04:51 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 10 09:04:51 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 10 09:04:51 localhost systemd[1]: Reached target Preparation for Local File Systems.
Oct 10 09:04:51 localhost systemd[1]: Reached target Local File Systems.
Oct 10 09:04:51 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Oct 10 09:04:51 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Oct 10 09:04:51 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 10 09:04:51 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Oct 10 09:04:51 localhost systemd[1]: Starting Automatic Boot Loader Update...
Oct 10 09:04:51 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Oct 10 09:04:51 localhost systemd[1]: Starting Create Volatile Files and Directories...
Oct 10 09:04:51 localhost bootctl[696]: Couldn't find EFI system partition, skipping.
Oct 10 09:04:51 localhost systemd[1]: Finished Automatic Boot Loader Update.
Oct 10 09:04:51 localhost systemd[1]: Finished Create Volatile Files and Directories.
Oct 10 09:04:51 localhost systemd[1]: Starting Security Auditing Service...
Oct 10 09:04:51 localhost systemd[1]: Starting RPC Bind...
Oct 10 09:04:51 localhost systemd[1]: Starting Rebuild Journal Catalog...
Oct 10 09:04:51 localhost auditd[702]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Oct 10 09:04:51 localhost auditd[702]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Oct 10 09:04:51 localhost systemd[1]: Finished Rebuild Journal Catalog.
Oct 10 09:04:51 localhost systemd[1]: Started RPC Bind.
Oct 10 09:04:51 localhost augenrules[707]: /sbin/augenrules: No change
Oct 10 09:04:51 localhost augenrules[722]: No rules
Oct 10 09:04:51 localhost augenrules[722]: enabled 1
Oct 10 09:04:51 localhost augenrules[722]: failure 1
Oct 10 09:04:51 localhost augenrules[722]: pid 702
Oct 10 09:04:51 localhost augenrules[722]: rate_limit 0
Oct 10 09:04:51 localhost augenrules[722]: backlog_limit 8192
Oct 10 09:04:51 localhost augenrules[722]: lost 0
Oct 10 09:04:51 localhost augenrules[722]: backlog 3
Oct 10 09:04:51 localhost augenrules[722]: backlog_wait_time 60000
Oct 10 09:04:51 localhost augenrules[722]: backlog_wait_time_actual 0
Oct 10 09:04:51 localhost augenrules[722]: enabled 1
Oct 10 09:04:51 localhost augenrules[722]: failure 1
Oct 10 09:04:51 localhost augenrules[722]: pid 702
Oct 10 09:04:51 localhost augenrules[722]: rate_limit 0
Oct 10 09:04:51 localhost augenrules[722]: backlog_limit 8192
Oct 10 09:04:51 localhost augenrules[722]: lost 0
Oct 10 09:04:51 localhost augenrules[722]: backlog 0
Oct 10 09:04:51 localhost augenrules[722]: backlog_wait_time 60000
Oct 10 09:04:51 localhost augenrules[722]: backlog_wait_time_actual 0
Oct 10 09:04:51 localhost augenrules[722]: enabled 1
Oct 10 09:04:51 localhost augenrules[722]: failure 1
Oct 10 09:04:51 localhost augenrules[722]: pid 702
Oct 10 09:04:51 localhost augenrules[722]: rate_limit 0
Oct 10 09:04:51 localhost augenrules[722]: backlog_limit 8192
Oct 10 09:04:51 localhost augenrules[722]: lost 0
Oct 10 09:04:51 localhost augenrules[722]: backlog 4
Oct 10 09:04:51 localhost augenrules[722]: backlog_wait_time 60000
Oct 10 09:04:51 localhost augenrules[722]: backlog_wait_time_actual 0
Oct 10 09:04:51 localhost systemd[1]: Started Security Auditing Service.
Oct 10 09:04:51 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Oct 10 09:04:51 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Oct 10 09:04:52 localhost systemd[1]: Finished Rebuild Hardware Database.
Oct 10 09:04:52 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 10 09:04:52 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Oct 10 09:04:52 localhost systemd[1]: Starting Update is Completed...
Oct 10 09:04:52 localhost systemd-udevd[730]: Using default interface naming scheme 'rhel-9.0'.
Oct 10 09:04:52 localhost systemd[1]: Finished Update is Completed.
Oct 10 09:04:52 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 10 09:04:52 localhost systemd[1]: Reached target System Initialization.
Oct 10 09:04:52 localhost systemd[1]: Started dnf makecache --timer.
Oct 10 09:04:52 localhost systemd[1]: Started Daily rotation of log files.
Oct 10 09:04:52 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Oct 10 09:04:52 localhost systemd[1]: Reached target Timer Units.
Oct 10 09:04:52 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 10 09:04:52 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Oct 10 09:04:52 localhost systemd[1]: Reached target Socket Units.
Oct 10 09:04:52 localhost systemd[1]: Starting D-Bus System Message Bus...
Oct 10 09:04:52 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 10 09:04:52 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Oct 10 09:04:52 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 10 09:04:52 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 10 09:04:52 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 10 09:04:52 localhost systemd-udevd[747]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 09:04:52 localhost systemd[1]: Started D-Bus System Message Bus.
Oct 10 09:04:52 localhost systemd[1]: Reached target Basic System.
Oct 10 09:04:52 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Oct 10 09:04:52 localhost dbus-broker-lau[759]: Ready
Oct 10 09:04:52 localhost systemd[1]: Starting NTP client/server...
Oct 10 09:04:52 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Oct 10 09:04:52 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Oct 10 09:04:52 localhost systemd[1]: Starting IPv4 firewall with iptables...
Oct 10 09:04:52 localhost systemd[1]: Started irqbalance daemon.
Oct 10 09:04:52 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Oct 10 09:04:52 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 10 09:04:52 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 10 09:04:52 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 10 09:04:52 localhost systemd[1]: Reached target sshd-keygen.target.
Oct 10 09:04:52 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Oct 10 09:04:52 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct 10 09:04:52 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 10 09:04:52 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Oct 10 09:04:52 localhost systemd[1]: Reached target User and Group Name Lookups.
Oct 10 09:04:52 localhost chronyd[795]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 10 09:04:52 localhost chronyd[795]: Loaded 0 symmetric keys
Oct 10 09:04:52 localhost chronyd[795]: Using right/UTC timezone to obtain leap second data
Oct 10 09:04:52 localhost chronyd[795]: Loaded seccomp filter (level 2)
Oct 10 09:04:52 localhost systemd[1]: Starting User Login Management...
Oct 10 09:04:52 localhost systemd[1]: Started NTP client/server.
Oct 10 09:04:52 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Oct 10 09:04:52 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Oct 10 09:04:52 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Oct 10 09:04:52 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Oct 10 09:04:52 localhost kernel: Console: switching to colour dummy device 80x25
Oct 10 09:04:52 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct 10 09:04:52 localhost kernel: [drm] features: -context_init
Oct 10 09:04:52 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Oct 10 09:04:52 localhost kernel: [drm] number of scanouts: 1
Oct 10 09:04:52 localhost kernel: [drm] number of cap sets: 0
Oct 10 09:04:52 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Oct 10 09:04:52 localhost systemd-logind[789]: New seat seat0.
Oct 10 09:04:52 localhost systemd-logind[789]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 10 09:04:52 localhost systemd-logind[789]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 10 09:04:52 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct 10 09:04:52 localhost kernel: Console: switching to colour frame buffer device 128x48
Oct 10 09:04:52 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct 10 09:04:52 localhost systemd[1]: Started User Login Management.
Oct 10 09:04:52 localhost kernel: kvm_amd: TSC scaling supported
Oct 10 09:04:52 localhost kernel: kvm_amd: Nested Virtualization enabled
Oct 10 09:04:52 localhost kernel: kvm_amd: Nested Paging enabled
Oct 10 09:04:52 localhost kernel: kvm_amd: LBR virtualization supported
Oct 10 09:04:52 localhost iptables.init[778]: iptables: Applying firewall rules: [  OK  ]
Oct 10 09:04:52 localhost systemd[1]: Finished IPv4 firewall with iptables.
Oct 10 09:04:53 localhost cloud-init[839]: Cloud-init v. 24.4-7.el9 running 'init-local' at Fri, 10 Oct 2025 09:04:53 +0000. Up 6.79 seconds.
Oct 10 09:04:53 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Oct 10 09:04:53 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Oct 10 09:04:53 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpx0gvtun6.mount: Deactivated successfully.
Oct 10 09:04:53 localhost systemd[1]: Starting Hostname Service...
Oct 10 09:04:53 localhost systemd[1]: Started Hostname Service.
Oct 10 09:04:53 np0005479822.novalocal systemd-hostnamed[853]: Hostname set to <np0005479822.novalocal> (static)
Oct 10 09:04:53 np0005479822.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Oct 10 09:04:53 np0005479822.novalocal systemd[1]: Reached target Preparation for Network.
Oct 10 09:04:53 np0005479822.novalocal systemd[1]: Starting Network Manager...
Oct 10 09:04:53 np0005479822.novalocal NetworkManager[857]: <info>  [1760087093.6836] NetworkManager (version 1.54.1-1.el9) is starting... (boot:fb56a8ec-12f4-4a91-b74d-e8ffc8e6ce0c)
Oct 10 09:04:53 np0005479822.novalocal NetworkManager[857]: <info>  [1760087093.6841] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 10 09:04:53 np0005479822.novalocal NetworkManager[857]: <info>  [1760087093.7013] manager[0x5637f32b3080]: monitoring kernel firmware directory '/lib/firmware'.
Oct 10 09:04:53 np0005479822.novalocal NetworkManager[857]: <info>  [1760087093.7092] hostname: hostname: using hostnamed
Oct 10 09:04:53 np0005479822.novalocal NetworkManager[857]: <info>  [1760087093.7092] hostname: static hostname changed from (none) to "np0005479822.novalocal"
Oct 10 09:04:53 np0005479822.novalocal NetworkManager[857]: <info>  [1760087093.7096] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 10 09:04:53 np0005479822.novalocal NetworkManager[857]: <info>  [1760087093.7250] manager[0x5637f32b3080]: rfkill: Wi-Fi hardware radio set enabled
Oct 10 09:04:53 np0005479822.novalocal NetworkManager[857]: <info>  [1760087093.7251] manager[0x5637f32b3080]: rfkill: WWAN hardware radio set enabled
Oct 10 09:04:53 np0005479822.novalocal NetworkManager[857]: <info>  [1760087093.7337] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 10 09:04:53 np0005479822.novalocal NetworkManager[857]: <info>  [1760087093.7338] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 10 09:04:53 np0005479822.novalocal NetworkManager[857]: <info>  [1760087093.7338] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 10 09:04:53 np0005479822.novalocal NetworkManager[857]: <info>  [1760087093.7339] manager: Networking is enabled by state file
Oct 10 09:04:53 np0005479822.novalocal NetworkManager[857]: <info>  [1760087093.7341] settings: Loaded settings plugin: keyfile (internal)
Oct 10 09:04:53 np0005479822.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Oct 10 09:04:53 np0005479822.novalocal NetworkManager[857]: <info>  [1760087093.7406] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 10 09:04:53 np0005479822.novalocal NetworkManager[857]: <info>  [1760087093.7429] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 10 09:04:53 np0005479822.novalocal NetworkManager[857]: <info>  [1760087093.7450] dhcp: init: Using DHCP client 'internal'
Oct 10 09:04:53 np0005479822.novalocal NetworkManager[857]: <info>  [1760087093.7453] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 10 09:04:53 np0005479822.novalocal NetworkManager[857]: <info>  [1760087093.7464] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 09:04:53 np0005479822.novalocal NetworkManager[857]: <info>  [1760087093.7475] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 10 09:04:53 np0005479822.novalocal NetworkManager[857]: <info>  [1760087093.7481] device (lo): Activation: starting connection 'lo' (da285bad-fb13-45e9-93ce-582789837c7a)
Oct 10 09:04:53 np0005479822.novalocal NetworkManager[857]: <info>  [1760087093.7489] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 10 09:04:53 np0005479822.novalocal NetworkManager[857]: <info>  [1760087093.7491] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 09:04:53 np0005479822.novalocal NetworkManager[857]: <info>  [1760087093.7516] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 10 09:04:53 np0005479822.novalocal NetworkManager[857]: <info>  [1760087093.7519] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 10 09:04:53 np0005479822.novalocal NetworkManager[857]: <info>  [1760087093.7520] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 10 09:04:53 np0005479822.novalocal NetworkManager[857]: <info>  [1760087093.7522] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 10 09:04:53 np0005479822.novalocal NetworkManager[857]: <info>  [1760087093.7524] device (eth0): carrier: link connected
Oct 10 09:04:53 np0005479822.novalocal NetworkManager[857]: <info>  [1760087093.7526] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 10 09:04:53 np0005479822.novalocal NetworkManager[857]: <info>  [1760087093.7531] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 10 09:04:53 np0005479822.novalocal NetworkManager[857]: <info>  [1760087093.7536] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 10 09:04:53 np0005479822.novalocal NetworkManager[857]: <info>  [1760087093.7539] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 10 09:04:53 np0005479822.novalocal NetworkManager[857]: <info>  [1760087093.7539] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 09:04:53 np0005479822.novalocal NetworkManager[857]: <info>  [1760087093.7541] manager: NetworkManager state is now CONNECTING
Oct 10 09:04:53 np0005479822.novalocal NetworkManager[857]: <info>  [1760087093.7542] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 09:04:53 np0005479822.novalocal NetworkManager[857]: <info>  [1760087093.7547] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 09:04:53 np0005479822.novalocal NetworkManager[857]: <info>  [1760087093.7550] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 10 09:04:53 np0005479822.novalocal NetworkManager[857]: <info>  [1760087093.7593] dhcp4 (eth0): state changed new lease, address=38.102.83.20
Oct 10 09:04:53 np0005479822.novalocal NetworkManager[857]: <info>  [1760087093.7599] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 10 09:04:53 np0005479822.novalocal NetworkManager[857]: <info>  [1760087093.7614] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 09:04:53 np0005479822.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 10 09:04:53 np0005479822.novalocal systemd[1]: Started Network Manager.
Oct 10 09:04:53 np0005479822.novalocal systemd[1]: Reached target Network.
Oct 10 09:04:53 np0005479822.novalocal systemd[1]: Starting Network Manager Wait Online...
Oct 10 09:04:53 np0005479822.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Oct 10 09:04:53 np0005479822.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 10 09:04:53 np0005479822.novalocal NetworkManager[857]: <info>  [1760087093.7876] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 10 09:04:53 np0005479822.novalocal NetworkManager[857]: <info>  [1760087093.7880] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 09:04:53 np0005479822.novalocal NetworkManager[857]: <info>  [1760087093.7882] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 10 09:04:53 np0005479822.novalocal NetworkManager[857]: <info>  [1760087093.7888] device (lo): Activation: successful, device activated.
Oct 10 09:04:53 np0005479822.novalocal NetworkManager[857]: <info>  [1760087093.7895] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 09:04:53 np0005479822.novalocal NetworkManager[857]: <info>  [1760087093.7898] manager: NetworkManager state is now CONNECTED_SITE
Oct 10 09:04:53 np0005479822.novalocal NetworkManager[857]: <info>  [1760087093.7901] device (eth0): Activation: successful, device activated.
Oct 10 09:04:53 np0005479822.novalocal NetworkManager[857]: <info>  [1760087093.7906] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 10 09:04:53 np0005479822.novalocal NetworkManager[857]: <info>  [1760087093.7909] manager: startup complete
Oct 10 09:04:53 np0005479822.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Oct 10 09:04:53 np0005479822.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 10 09:04:53 np0005479822.novalocal systemd[1]: Reached target NFS client services.
Oct 10 09:04:53 np0005479822.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Oct 10 09:04:53 np0005479822.novalocal systemd[1]: Reached target Remote File Systems.
Oct 10 09:04:53 np0005479822.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 10 09:04:53 np0005479822.novalocal systemd[1]: Finished Network Manager Wait Online.
Oct 10 09:04:53 np0005479822.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Oct 10 09:04:54 np0005479822.novalocal cloud-init[922]: Cloud-init v. 24.4-7.el9 running 'init' at Fri, 10 Oct 2025 09:04:54 +0000. Up 7.88 seconds.
Oct 10 09:04:54 np0005479822.novalocal cloud-init[922]: ci-info: ++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Oct 10 09:04:54 np0005479822.novalocal cloud-init[922]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Oct 10 09:04:54 np0005479822.novalocal cloud-init[922]: ci-info: | Device |  Up  |           Address           |      Mask     | Scope  |     Hw-Address    |
Oct 10 09:04:54 np0005479822.novalocal cloud-init[922]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Oct 10 09:04:54 np0005479822.novalocal cloud-init[922]: ci-info: |  eth0  | True |         38.102.83.20        | 255.255.255.0 | global | fa:16:3e:c5:0d:38 |
Oct 10 09:04:54 np0005479822.novalocal cloud-init[922]: ci-info: |  eth0  | True | fe80::f816:3eff:fec5:d38/64 |       .       |  link  | fa:16:3e:c5:0d:38 |
Oct 10 09:04:54 np0005479822.novalocal cloud-init[922]: ci-info: |   lo   | True |          127.0.0.1          |   255.0.0.0   |  host  |         .         |
Oct 10 09:04:54 np0005479822.novalocal cloud-init[922]: ci-info: |   lo   | True |           ::1/128           |       .       |  host  |         .         |
Oct 10 09:04:54 np0005479822.novalocal cloud-init[922]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Oct 10 09:04:54 np0005479822.novalocal cloud-init[922]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Oct 10 09:04:54 np0005479822.novalocal cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 10 09:04:54 np0005479822.novalocal cloud-init[922]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Oct 10 09:04:54 np0005479822.novalocal cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 10 09:04:54 np0005479822.novalocal cloud-init[922]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Oct 10 09:04:54 np0005479822.novalocal cloud-init[922]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Oct 10 09:04:54 np0005479822.novalocal cloud-init[922]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Oct 10 09:04:54 np0005479822.novalocal cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct 10 09:04:54 np0005479822.novalocal cloud-init[922]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Oct 10 09:04:54 np0005479822.novalocal cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 10 09:04:54 np0005479822.novalocal cloud-init[922]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Oct 10 09:04:54 np0005479822.novalocal cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 10 09:04:54 np0005479822.novalocal cloud-init[922]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Oct 10 09:04:54 np0005479822.novalocal cloud-init[922]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Oct 10 09:04:54 np0005479822.novalocal cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 10 09:04:55 np0005479822.novalocal useradd[989]: new group: name=cloud-user, GID=1001
Oct 10 09:04:55 np0005479822.novalocal useradd[989]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Oct 10 09:04:55 np0005479822.novalocal useradd[989]: add 'cloud-user' to group 'adm'
Oct 10 09:04:55 np0005479822.novalocal useradd[989]: add 'cloud-user' to group 'systemd-journal'
Oct 10 09:04:55 np0005479822.novalocal useradd[989]: add 'cloud-user' to shadow group 'adm'
Oct 10 09:04:55 np0005479822.novalocal useradd[989]: add 'cloud-user' to shadow group 'systemd-journal'
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: Generating public/private rsa key pair.
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: The key fingerprint is:
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: SHA256:g3CHQSjxteelLmbilWjb94SN/JociuetcndYLvv3Iwc root@np0005479822.novalocal
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: The key's randomart image is:
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: +---[RSA 3072]----+
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: |  .. o+          |
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: |  .... +         |
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: |   .o + o .      |
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: |     o = o       |
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: |      . S        |
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: |     . + =. E    |
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: |    + * *+o  .   |
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: |   o.Oo===o o o  |
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: |    +==+*B+. +.. |
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: +----[SHA256]-----+
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: Generating public/private ecdsa key pair.
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: The key fingerprint is:
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: SHA256:tCnXzgijmRuBp7Wqe0xuaPNgbnoKDvhNpO1wHQ5UY64 root@np0005479822.novalocal
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: The key's randomart image is:
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: +---[ECDSA 256]---+
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: |      +          |
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: |     + .         |
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: |    . . .        |
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: |   o . . +       |
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: |  . E = S .      |
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: |. .B X * +       |
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: |+B+ X o . o      |
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: |B=BB o           |
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: |OX=.+            |
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: +----[SHA256]-----+
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: Generating public/private ed25519 key pair.
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: The key fingerprint is:
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: SHA256:olpr5gMCN9xU4nDpDbOa35Uc2XGLsvtuw3v6LGO0SRY root@np0005479822.novalocal
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: The key's randomart image is:
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: +--[ED25519 256]--+
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: |  . oo.          |
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: |   +=.    . .    |
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: | . +.=   o + .   |
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: |. + + . + E .    |
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: |.. +  ..S= .     |
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: |. +  . .= +      |
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: | . oo. . * o     |
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: |   o=.. . X..    |
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: |  .+o.   ==Oo    |
Oct 10 09:04:55 np0005479822.novalocal cloud-init[922]: +----[SHA256]-----+
Oct 10 09:04:55 np0005479822.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Oct 10 09:04:55 np0005479822.novalocal systemd[1]: Reached target Cloud-config availability.
Oct 10 09:04:55 np0005479822.novalocal systemd[1]: Reached target Network is Online.
Oct 10 09:04:55 np0005479822.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Oct 10 09:04:55 np0005479822.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Oct 10 09:04:55 np0005479822.novalocal systemd[1]: Starting System Logging Service...
Oct 10 09:04:55 np0005479822.novalocal sm-notify[1004]: Version 2.5.4 starting
Oct 10 09:04:55 np0005479822.novalocal systemd[1]: Starting OpenSSH server daemon...
Oct 10 09:04:55 np0005479822.novalocal systemd[1]: Starting Permit User Sessions...
Oct 10 09:04:55 np0005479822.novalocal systemd[1]: Started Notify NFS peers of a restart.
Oct 10 09:04:55 np0005479822.novalocal sshd[1006]: Server listening on 0.0.0.0 port 22.
Oct 10 09:04:55 np0005479822.novalocal sshd[1006]: Server listening on :: port 22.
Oct 10 09:04:55 np0005479822.novalocal systemd[1]: Started OpenSSH server daemon.
Oct 10 09:04:55 np0005479822.novalocal systemd[1]: Finished Permit User Sessions.
Oct 10 09:04:55 np0005479822.novalocal systemd[1]: Started Command Scheduler.
Oct 10 09:04:55 np0005479822.novalocal systemd[1]: Started Getty on tty1.
Oct 10 09:04:55 np0005479822.novalocal rsyslogd[1005]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1005" x-info="https://www.rsyslog.com"] start
Oct 10 09:04:55 np0005479822.novalocal rsyslogd[1005]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Oct 10 09:04:55 np0005479822.novalocal systemd[1]: Started Serial Getty on ttyS0.
Oct 10 09:04:55 np0005479822.novalocal systemd[1]: Reached target Login Prompts.
Oct 10 09:04:55 np0005479822.novalocal systemd[1]: Started System Logging Service.
Oct 10 09:04:55 np0005479822.novalocal systemd[1]: Reached target Multi-User System.
Oct 10 09:04:55 np0005479822.novalocal crond[1008]: (CRON) STARTUP (1.5.7)
Oct 10 09:04:55 np0005479822.novalocal crond[1008]: (CRON) INFO (Syslog will be used instead of sendmail.)
Oct 10 09:04:55 np0005479822.novalocal crond[1008]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 56% if used.)
Oct 10 09:04:55 np0005479822.novalocal crond[1008]: (CRON) INFO (running with inotify support)
Oct 10 09:04:55 np0005479822.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Oct 10 09:04:55 np0005479822.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct 10 09:04:55 np0005479822.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Oct 10 09:04:55 np0005479822.novalocal rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 09:04:55 np0005479822.novalocal cloud-init[1018]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Fri, 10 Oct 2025 09:04:55 +0000. Up 9.51 seconds.
Oct 10 09:04:55 np0005479822.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Oct 10 09:04:55 np0005479822.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Oct 10 09:04:55 np0005479822.novalocal sshd-session[1020]: Connection reset by 38.102.83.114 port 48534 [preauth]
Oct 10 09:04:55 np0005479822.novalocal sshd-session[1022]: Unable to negotiate with 38.102.83.114 port 48550: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Oct 10 09:04:55 np0005479822.novalocal sshd-session[1024]: Connection reset by 38.102.83.114 port 48554 [preauth]
Oct 10 09:04:55 np0005479822.novalocal sshd-session[1026]: Unable to negotiate with 38.102.83.114 port 48558: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Oct 10 09:04:55 np0005479822.novalocal sshd-session[1028]: Unable to negotiate with 38.102.83.114 port 48564: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Oct 10 09:04:56 np0005479822.novalocal sshd-session[1032]: Connection closed by 38.102.83.114 port 48580 [preauth]
Oct 10 09:04:56 np0005479822.novalocal sshd-session[1034]: Unable to negotiate with 38.102.83.114 port 48590: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Oct 10 09:04:56 np0005479822.novalocal sshd-session[1036]: Unable to negotiate with 38.102.83.114 port 48604: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Oct 10 09:04:56 np0005479822.novalocal sshd-session[1030]: Connection closed by 38.102.83.114 port 48572 [preauth]
Oct 10 09:04:56 np0005479822.novalocal cloud-init[1040]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Fri, 10 Oct 2025 09:04:56 +0000. Up 9.91 seconds.
Oct 10 09:04:56 np0005479822.novalocal cloud-init[1042]: #############################################################
Oct 10 09:04:56 np0005479822.novalocal cloud-init[1043]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Oct 10 09:04:56 np0005479822.novalocal cloud-init[1045]: 256 SHA256:tCnXzgijmRuBp7Wqe0xuaPNgbnoKDvhNpO1wHQ5UY64 root@np0005479822.novalocal (ECDSA)
Oct 10 09:04:56 np0005479822.novalocal cloud-init[1047]: 256 SHA256:olpr5gMCN9xU4nDpDbOa35Uc2XGLsvtuw3v6LGO0SRY root@np0005479822.novalocal (ED25519)
Oct 10 09:04:56 np0005479822.novalocal cloud-init[1049]: 3072 SHA256:g3CHQSjxteelLmbilWjb94SN/JociuetcndYLvv3Iwc root@np0005479822.novalocal (RSA)
Oct 10 09:04:56 np0005479822.novalocal cloud-init[1050]: -----END SSH HOST KEY FINGERPRINTS-----
Oct 10 09:04:56 np0005479822.novalocal cloud-init[1051]: #############################################################
Oct 10 09:04:56 np0005479822.novalocal cloud-init[1040]: Cloud-init v. 24.4-7.el9 finished at Fri, 10 Oct 2025 09:04:56 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.09 seconds
Oct 10 09:04:56 np0005479822.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Oct 10 09:04:56 np0005479822.novalocal systemd[1]: Reached target Cloud-init target.
Oct 10 09:04:56 np0005479822.novalocal systemd[1]: Startup finished in 1.685s (kernel) + 2.738s (initrd) + 5.731s (userspace) = 10.155s.
Oct 10 09:04:58 np0005479822.novalocal chronyd[795]: Selected source 45.61.49.156 (2.centos.pool.ntp.org)
Oct 10 09:04:58 np0005479822.novalocal chronyd[795]: System clock TAI offset set to 37 seconds
Oct 10 09:05:03 np0005479822.novalocal irqbalance[780]: Cannot change IRQ 25 affinity: Operation not permitted
Oct 10 09:05:03 np0005479822.novalocal irqbalance[780]: IRQ 25 affinity is now unmanaged
Oct 10 09:05:03 np0005479822.novalocal irqbalance[780]: Cannot change IRQ 31 affinity: Operation not permitted
Oct 10 09:05:03 np0005479822.novalocal irqbalance[780]: IRQ 31 affinity is now unmanaged
Oct 10 09:05:03 np0005479822.novalocal irqbalance[780]: Cannot change IRQ 28 affinity: Operation not permitted
Oct 10 09:05:03 np0005479822.novalocal irqbalance[780]: IRQ 28 affinity is now unmanaged
Oct 10 09:05:03 np0005479822.novalocal irqbalance[780]: Cannot change IRQ 32 affinity: Operation not permitted
Oct 10 09:05:03 np0005479822.novalocal irqbalance[780]: IRQ 32 affinity is now unmanaged
Oct 10 09:05:03 np0005479822.novalocal irqbalance[780]: Cannot change IRQ 30 affinity: Operation not permitted
Oct 10 09:05:03 np0005479822.novalocal irqbalance[780]: IRQ 30 affinity is now unmanaged
Oct 10 09:05:03 np0005479822.novalocal irqbalance[780]: Cannot change IRQ 29 affinity: Operation not permitted
Oct 10 09:05:03 np0005479822.novalocal irqbalance[780]: IRQ 29 affinity is now unmanaged
Oct 10 09:05:03 np0005479822.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 10 09:05:19 np0005479822.novalocal sshd-session[1055]: Accepted publickey for zuul from 38.102.83.114 port 53272 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Oct 10 09:05:19 np0005479822.novalocal systemd[1]: Created slice User Slice of UID 1000.
Oct 10 09:05:19 np0005479822.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct 10 09:05:19 np0005479822.novalocal systemd-logind[789]: New session 1 of user zuul.
Oct 10 09:05:19 np0005479822.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct 10 09:05:19 np0005479822.novalocal systemd[1]: Starting User Manager for UID 1000...
Oct 10 09:05:19 np0005479822.novalocal systemd[1059]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:05:19 np0005479822.novalocal systemd[1059]: Queued start job for default target Main User Target.
Oct 10 09:05:19 np0005479822.novalocal systemd[1059]: Created slice User Application Slice.
Oct 10 09:05:19 np0005479822.novalocal systemd[1059]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 10 09:05:19 np0005479822.novalocal systemd[1059]: Started Daily Cleanup of User's Temporary Directories.
Oct 10 09:05:19 np0005479822.novalocal systemd[1059]: Reached target Paths.
Oct 10 09:05:19 np0005479822.novalocal systemd[1059]: Reached target Timers.
Oct 10 09:05:19 np0005479822.novalocal systemd[1059]: Starting D-Bus User Message Bus Socket...
Oct 10 09:05:19 np0005479822.novalocal systemd[1059]: Starting Create User's Volatile Files and Directories...
Oct 10 09:05:19 np0005479822.novalocal systemd[1059]: Finished Create User's Volatile Files and Directories.
Oct 10 09:05:19 np0005479822.novalocal systemd[1059]: Listening on D-Bus User Message Bus Socket.
Oct 10 09:05:19 np0005479822.novalocal systemd[1059]: Reached target Sockets.
Oct 10 09:05:19 np0005479822.novalocal systemd[1059]: Reached target Basic System.
Oct 10 09:05:19 np0005479822.novalocal systemd[1059]: Reached target Main User Target.
Oct 10 09:05:19 np0005479822.novalocal systemd[1059]: Startup finished in 151ms.
Oct 10 09:05:19 np0005479822.novalocal systemd[1]: Started User Manager for UID 1000.
Oct 10 09:05:19 np0005479822.novalocal systemd[1]: Started Session 1 of User zuul.
Oct 10 09:05:19 np0005479822.novalocal sshd-session[1055]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:05:20 np0005479822.novalocal python3[1142]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:05:23 np0005479822.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 10 09:05:23 np0005479822.novalocal python3[1170]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:05:32 np0005479822.novalocal python3[1230]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:05:33 np0005479822.novalocal python3[1270]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Oct 10 09:05:35 np0005479822.novalocal python3[1296]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDDEBkxJ4sw2+DK3cAbafLjRenK6XkRzPrF3EgUC0Qy/9kZ0kuErGkKyCEXRNE93NnKaUfoU9ebcJtP/W0B6xem+P337Yb5eT1d5d0DPlSyJ224O/rNncfiIo6YcMhrWXlb8yWwfHogZqjmOgJoH57cdsVMt26tUmFXzrJ1qEBloCvfoEe/tx8o3aeflIhUQ0zm2bbmhRn09oGRCODyyr02YoJZm5GbMiTb7mz8xvM31PEo8DzS5ti1YMOUi76ojLKIS6hZkIk4sUuSXmOwBoYhmyGjvs8csl/rxfVJq3bV+DFnatOKlFCyjgY0Ed4oCeReEGI6h29najM/8mUzfOeBj0dyWj3N3oOwlewtF5ifTB4JPwfEN1Rx37wbEzN/2Q7MOKzeWDxP2E0trD5ey9oqWFCpRpuJURMiPr+A6h070uR8U8vUNxGtH3vAmkuN+p3w79WF1wzlCmcoC+oSdwETcoOqkD84qkNgYJpVVpboSnwBo/H/aPJuJhs/nYPhz+c= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:35 np0005479822.novalocal python3[1320]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:05:36 np0005479822.novalocal python3[1419]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 09:05:36 np0005479822.novalocal python3[1490]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760087136.2035913-252-253170561469944/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=bea29065a9ff49468ede17c902a062ce_id_rsa follow=False checksum=6477c55dd7b29e382b0ff49c34043ebcd2bcc305 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:05:37 np0005479822.novalocal python3[1613]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 09:05:37 np0005479822.novalocal python3[1684]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760087137.1911607-307-264647230127252/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=bea29065a9ff49468ede17c902a062ce_id_rsa.pub follow=False checksum=8b86d6c8317b3a249fa7c3a90607af8e51a186ef backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:05:39 np0005479822.novalocal python3[1732]: ansible-ping Invoked with data=pong
Oct 10 09:05:40 np0005479822.novalocal python3[1756]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:05:42 np0005479822.novalocal python3[1814]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Oct 10 09:05:43 np0005479822.novalocal python3[1846]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:05:44 np0005479822.novalocal python3[1870]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:05:44 np0005479822.novalocal python3[1894]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:05:44 np0005479822.novalocal python3[1918]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:05:44 np0005479822.novalocal python3[1942]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:05:45 np0005479822.novalocal python3[1966]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:05:46 np0005479822.novalocal sudo[1990]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccguhzswsaikknfubzdlputyxioxjbqq ; /usr/bin/python3'
Oct 10 09:05:46 np0005479822.novalocal sudo[1990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:05:46 np0005479822.novalocal python3[1992]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:05:46 np0005479822.novalocal sudo[1990]: pam_unix(sudo:session): session closed for user root
Oct 10 09:05:47 np0005479822.novalocal sudo[2068]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqklkdbsxjrzgdlxrtffsrngeehsevre ; /usr/bin/python3'
Oct 10 09:05:47 np0005479822.novalocal sudo[2068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:05:47 np0005479822.novalocal python3[2070]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 09:05:47 np0005479822.novalocal sudo[2068]: pam_unix(sudo:session): session closed for user root
Oct 10 09:05:48 np0005479822.novalocal sudo[2141]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldzvrkyhtbyiopyjjsizqpvsdtptdbxa ; /usr/bin/python3'
Oct 10 09:05:48 np0005479822.novalocal sudo[2141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:05:48 np0005479822.novalocal python3[2143]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1760087147.2338092-33-274231801842306/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:05:48 np0005479822.novalocal sudo[2141]: pam_unix(sudo:session): session closed for user root
Oct 10 09:05:48 np0005479822.novalocal python3[2191]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:49 np0005479822.novalocal python3[2215]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:49 np0005479822.novalocal python3[2239]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:49 np0005479822.novalocal python3[2263]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:49 np0005479822.novalocal python3[2287]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:50 np0005479822.novalocal python3[2311]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:50 np0005479822.novalocal python3[2335]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:50 np0005479822.novalocal python3[2359]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:51 np0005479822.novalocal python3[2383]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:51 np0005479822.novalocal python3[2407]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:51 np0005479822.novalocal python3[2431]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:52 np0005479822.novalocal python3[2455]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:52 np0005479822.novalocal python3[2479]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:52 np0005479822.novalocal python3[2503]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:52 np0005479822.novalocal python3[2527]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:53 np0005479822.novalocal python3[2551]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:53 np0005479822.novalocal python3[2575]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:53 np0005479822.novalocal python3[2599]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:54 np0005479822.novalocal python3[2623]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:54 np0005479822.novalocal python3[2647]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:54 np0005479822.novalocal python3[2671]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:55 np0005479822.novalocal python3[2695]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:55 np0005479822.novalocal python3[2719]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:55 np0005479822.novalocal python3[2743]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:55 np0005479822.novalocal python3[2767]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:56 np0005479822.novalocal python3[2791]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:05:58 np0005479822.novalocal sudo[2815]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwxlegguwkenpioxogkhuddceutdppno ; /usr/bin/python3'
Oct 10 09:05:58 np0005479822.novalocal sudo[2815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:05:58 np0005479822.novalocal python3[2817]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 10 09:05:58 np0005479822.novalocal systemd[1]: Starting Time & Date Service...
Oct 10 09:05:58 np0005479822.novalocal systemd[1]: Started Time & Date Service.
Oct 10 09:05:58 np0005479822.novalocal systemd-timedated[2819]: Changed time zone to 'UTC' (UTC).
Oct 10 09:05:58 np0005479822.novalocal sudo[2815]: pam_unix(sudo:session): session closed for user root
Oct 10 09:05:58 np0005479822.novalocal sudo[2846]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmjgqtawmfxoiulnydmrjstowrvkoypg ; /usr/bin/python3'
Oct 10 09:05:58 np0005479822.novalocal sudo[2846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:05:58 np0005479822.novalocal python3[2848]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:05:58 np0005479822.novalocal sudo[2846]: pam_unix(sudo:session): session closed for user root
Oct 10 09:05:59 np0005479822.novalocal python3[2924]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 09:05:59 np0005479822.novalocal python3[2995]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1760087159.1770568-252-19086082499537/source _original_basename=tmpbyh7uw4_ follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:06:00 np0005479822.novalocal python3[3095]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 09:06:00 np0005479822.novalocal python3[3166]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1760087160.0496268-303-249221800546970/source _original_basename=tmpjq1p3cs3 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:06:01 np0005479822.novalocal sudo[3266]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abgdneobzbmpgnzctihbnplebxpgoujn ; /usr/bin/python3'
Oct 10 09:06:01 np0005479822.novalocal sudo[3266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:06:01 np0005479822.novalocal python3[3268]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 09:06:01 np0005479822.novalocal sudo[3266]: pam_unix(sudo:session): session closed for user root
Oct 10 09:06:01 np0005479822.novalocal sudo[3339]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uilgsaqmudqykeffezeftlxfbmupuqcx ; /usr/bin/python3'
Oct 10 09:06:01 np0005479822.novalocal sudo[3339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:06:02 np0005479822.novalocal python3[3341]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1760087161.3937972-382-8132268083242/source _original_basename=tmp9xext10y follow=False checksum=6cbe59410b7de8cef4e7b572834f646539a41bfa backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:06:02 np0005479822.novalocal sudo[3339]: pam_unix(sudo:session): session closed for user root
Oct 10 09:06:02 np0005479822.novalocal python3[3389]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:06:02 np0005479822.novalocal python3[3415]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:06:03 np0005479822.novalocal irqbalance[780]: Cannot change IRQ 26 affinity: Operation not permitted
Oct 10 09:06:03 np0005479822.novalocal irqbalance[780]: IRQ 26 affinity is now unmanaged
Oct 10 09:06:03 np0005479822.novalocal sudo[3493]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prjpvyskermxxhuhdmxlatwishoeqzij ; /usr/bin/python3'
Oct 10 09:06:03 np0005479822.novalocal sudo[3493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:06:03 np0005479822.novalocal python3[3495]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 09:06:03 np0005479822.novalocal sudo[3493]: pam_unix(sudo:session): session closed for user root
Oct 10 09:06:03 np0005479822.novalocal sudo[3566]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aifrifgsykkushohxapfuccvxfcyfbwx ; /usr/bin/python3'
Oct 10 09:06:03 np0005479822.novalocal sudo[3566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:06:03 np0005479822.novalocal python3[3568]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1760087163.2094662-453-88093426221127/source _original_basename=tmpysg1iv2o follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:06:03 np0005479822.novalocal sudo[3566]: pam_unix(sudo:session): session closed for user root
Oct 10 09:06:04 np0005479822.novalocal sudo[3617]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqtbvhiydnsoiqsxqvypjtmhtqzvotiy ; /usr/bin/python3'
Oct 10 09:06:04 np0005479822.novalocal sudo[3617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:06:04 np0005479822.novalocal python3[3619]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163efc-24cc-80e1-2ccb-00000000001f-1-compute1 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:06:04 np0005479822.novalocal sudo[3617]: pam_unix(sudo:session): session closed for user root
Oct 10 09:06:05 np0005479822.novalocal python3[3647]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-80e1-2ccb-000000000020-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Oct 10 09:06:06 np0005479822.novalocal python3[3675]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:06:23 np0005479822.novalocal sudo[3699]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urxurbnlzzgpffqydrfwhrecwnnzokzi ; /usr/bin/python3'
Oct 10 09:06:23 np0005479822.novalocal sudo[3699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:06:23 np0005479822.novalocal python3[3701]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:06:23 np0005479822.novalocal sudo[3699]: pam_unix(sudo:session): session closed for user root
Oct 10 09:06:28 np0005479822.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 10 09:07:23 np0005479822.novalocal sshd-session[1069]: Received disconnect from 38.102.83.114 port 53272:11: disconnected by user
Oct 10 09:07:23 np0005479822.novalocal sshd-session[1069]: Disconnected from user zuul 38.102.83.114 port 53272
Oct 10 09:07:23 np0005479822.novalocal sshd-session[1055]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:07:23 np0005479822.novalocal systemd-logind[789]: Session 1 logged out. Waiting for processes to exit.
Oct 10 09:07:27 np0005479822.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 10 09:07:27 np0005479822.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Oct 10 09:07:27 np0005479822.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Oct 10 09:07:27 np0005479822.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Oct 10 09:07:27 np0005479822.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Oct 10 09:07:27 np0005479822.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Oct 10 09:07:27 np0005479822.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Oct 10 09:07:27 np0005479822.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Oct 10 09:07:27 np0005479822.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Oct 10 09:07:27 np0005479822.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Oct 10 09:07:27 np0005479822.novalocal NetworkManager[857]: <info>  [1760087247.7518] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 10 09:07:27 np0005479822.novalocal systemd-udevd[3704]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 09:07:27 np0005479822.novalocal NetworkManager[857]: <info>  [1760087247.7720] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 09:07:27 np0005479822.novalocal NetworkManager[857]: <info>  [1760087247.7745] settings: (eth1): created default wired connection 'Wired connection 1'
Oct 10 09:07:27 np0005479822.novalocal NetworkManager[857]: <info>  [1760087247.7749] device (eth1): carrier: link connected
Oct 10 09:07:27 np0005479822.novalocal NetworkManager[857]: <info>  [1760087247.7750] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 10 09:07:27 np0005479822.novalocal NetworkManager[857]: <info>  [1760087247.7755] policy: auto-activating connection 'Wired connection 1' (098c32ca-35a5-3746-add5-29d4391ea12b)
Oct 10 09:07:27 np0005479822.novalocal NetworkManager[857]: <info>  [1760087247.7759] device (eth1): Activation: starting connection 'Wired connection 1' (098c32ca-35a5-3746-add5-29d4391ea12b)
Oct 10 09:07:27 np0005479822.novalocal NetworkManager[857]: <info>  [1760087247.7759] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 09:07:27 np0005479822.novalocal NetworkManager[857]: <info>  [1760087247.7762] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 09:07:27 np0005479822.novalocal NetworkManager[857]: <info>  [1760087247.7765] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 09:07:27 np0005479822.novalocal NetworkManager[857]: <info>  [1760087247.7769] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 10 09:07:27 np0005479822.novalocal systemd[1059]: Starting Mark boot as successful...
Oct 10 09:07:27 np0005479822.novalocal systemd[1059]: Finished Mark boot as successful.
Oct 10 09:07:28 np0005479822.novalocal sshd-session[3709]: Accepted publickey for zuul from 38.102.83.114 port 46680 ssh2: RSA SHA256:RwPGCkYG1Mlcunwa9tTlXvLSrYLunSGhwxtMMuIfos4
Oct 10 09:07:28 np0005479822.novalocal systemd-logind[789]: New session 3 of user zuul.
Oct 10 09:07:28 np0005479822.novalocal systemd[1]: Started Session 3 of User zuul.
Oct 10 09:07:28 np0005479822.novalocal sshd-session[3709]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:07:28 np0005479822.novalocal python3[3736]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163efc-24cc-dbf0-3472-00000000018f-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:07:38 np0005479822.novalocal sudo[3814]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlgfbtnvetharsphepalmhhzbzxdqcju ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 10 09:07:38 np0005479822.novalocal sudo[3814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:07:39 np0005479822.novalocal python3[3816]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 09:07:39 np0005479822.novalocal sudo[3814]: pam_unix(sudo:session): session closed for user root
Oct 10 09:07:39 np0005479822.novalocal sudo[3887]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvortgjivfwzdclqfdycwefrmbjnnfsi ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 10 09:07:39 np0005479822.novalocal sudo[3887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:07:39 np0005479822.novalocal python3[3889]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760087258.6499834-155-60949084712063/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=443c8cb365d54d2c1d375a8deb27e5080d25cfce backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:07:39 np0005479822.novalocal sudo[3887]: pam_unix(sudo:session): session closed for user root
Oct 10 09:07:39 np0005479822.novalocal sudo[3937]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atlqytsmivatnbumduloasfrthrffyrj ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 10 09:07:39 np0005479822.novalocal sudo[3937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:07:40 np0005479822.novalocal python3[3939]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 09:07:40 np0005479822.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct 10 09:07:40 np0005479822.novalocal systemd[1]: Stopped Network Manager Wait Online.
Oct 10 09:07:40 np0005479822.novalocal systemd[1]: Stopping Network Manager Wait Online...
Oct 10 09:07:40 np0005479822.novalocal systemd[1]: Stopping Network Manager...
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[857]: <info>  [1760087260.1080] caught SIGTERM, shutting down normally.
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[857]: <info>  [1760087260.1100] dhcp4 (eth0): canceled DHCP transaction
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[857]: <info>  [1760087260.1101] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[857]: <info>  [1760087260.1101] dhcp4 (eth0): state changed no lease
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[857]: <info>  [1760087260.1105] manager: NetworkManager state is now CONNECTING
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[857]: <info>  [1760087260.1164] dhcp4 (eth1): canceled DHCP transaction
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[857]: <info>  [1760087260.1164] dhcp4 (eth1): state changed no lease
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[857]: <info>  [1760087260.1225] exiting (success)
Oct 10 09:07:40 np0005479822.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 10 09:07:40 np0005479822.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 10 09:07:40 np0005479822.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Oct 10 09:07:40 np0005479822.novalocal systemd[1]: Stopped Network Manager.
Oct 10 09:07:40 np0005479822.novalocal systemd[1]: NetworkManager.service: Consumed 1.122s CPU time, 10.2M memory peak.
Oct 10 09:07:40 np0005479822.novalocal systemd[1]: Starting Network Manager...
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.1920] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:fb56a8ec-12f4-4a91-b74d-e8ffc8e6ce0c)
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.1924] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.1991] manager[0x5634a5673070]: monitoring kernel firmware directory '/lib/firmware'.
Oct 10 09:07:40 np0005479822.novalocal systemd[1]: Starting Hostname Service...
Oct 10 09:07:40 np0005479822.novalocal systemd[1]: Started Hostname Service.
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.2848] hostname: hostname: using hostnamed
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.2849] hostname: static hostname changed from (none) to "np0005479822.novalocal"
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.2856] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.2864] manager[0x5634a5673070]: rfkill: Wi-Fi hardware radio set enabled
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.2864] manager[0x5634a5673070]: rfkill: WWAN hardware radio set enabled
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.2901] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.2901] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.2902] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.2902] manager: Networking is enabled by state file
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.2905] settings: Loaded settings plugin: keyfile (internal)
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.2910] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.2938] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.2953] dhcp: init: Using DHCP client 'internal'
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.2956] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.2963] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.2970] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.2979] device (lo): Activation: starting connection 'lo' (da285bad-fb13-45e9-93ce-582789837c7a)
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.2987] device (eth0): carrier: link connected
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.2992] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.2998] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.2998] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.3007] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.3017] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.3027] device (eth1): carrier: link connected
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.3032] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.3038] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (098c32ca-35a5-3746-add5-29d4391ea12b) (indicated)
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.3039] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.3046] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.3056] device (eth1): Activation: starting connection 'Wired connection 1' (098c32ca-35a5-3746-add5-29d4391ea12b)
Oct 10 09:07:40 np0005479822.novalocal systemd[1]: Started Network Manager.
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.3067] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.3075] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.3077] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.3080] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.3084] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.3087] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.3090] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.3093] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.3104] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.3111] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.3114] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.3129] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.3136] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.3158] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.3166] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.3173] device (lo): Activation: successful, device activated.
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.3183] dhcp4 (eth0): state changed new lease, address=38.102.83.20
Oct 10 09:07:40 np0005479822.novalocal systemd[1]: Starting Network Manager Wait Online...
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.3189] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.3265] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.3298] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.3300] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.3304] manager: NetworkManager state is now CONNECTED_SITE
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.3307] device (eth0): Activation: successful, device activated.
Oct 10 09:07:40 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087260.3321] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 10 09:07:40 np0005479822.novalocal sudo[3937]: pam_unix(sudo:session): session closed for user root
Oct 10 09:07:40 np0005479822.novalocal python3[4023]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163efc-24cc-dbf0-3472-0000000000c8-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:07:50 np0005479822.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 10 09:08:10 np0005479822.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 10 09:08:25 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087305.2351] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 10 09:08:25 np0005479822.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 10 09:08:25 np0005479822.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 10 09:08:25 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087305.2577] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 10 09:08:25 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087305.2581] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 10 09:08:25 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087305.2591] device (eth1): Activation: successful, device activated.
Oct 10 09:08:25 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087305.2599] manager: startup complete
Oct 10 09:08:25 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087305.2601] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Oct 10 09:08:25 np0005479822.novalocal NetworkManager[3951]: <warn>  [1760087305.2607] device (eth1): Activation: failed for connection 'Wired connection 1'
Oct 10 09:08:25 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087305.2627] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Oct 10 09:08:25 np0005479822.novalocal systemd[1]: Finished Network Manager Wait Online.
Oct 10 09:08:25 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087305.2714] dhcp4 (eth1): canceled DHCP transaction
Oct 10 09:08:25 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087305.2714] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct 10 09:08:25 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087305.2714] dhcp4 (eth1): state changed no lease
Oct 10 09:08:25 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087305.2740] policy: auto-activating connection 'ci-private-network' (8d1fd0d1-71da-5534-9141-6178f63cc684)
Oct 10 09:08:25 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087305.2749] device (eth1): Activation: starting connection 'ci-private-network' (8d1fd0d1-71da-5534-9141-6178f63cc684)
Oct 10 09:08:25 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087305.2751] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 09:08:25 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087305.2755] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 09:08:25 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087305.2765] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 09:08:25 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087305.2777] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 09:08:25 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087305.2824] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 09:08:25 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087305.2827] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 09:08:25 np0005479822.novalocal NetworkManager[3951]: <info>  [1760087305.2837] device (eth1): Activation: successful, device activated.
Oct 10 09:08:35 np0005479822.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 10 09:08:40 np0005479822.novalocal sshd-session[3712]: Received disconnect from 38.102.83.114 port 46680:11: disconnected by user
Oct 10 09:08:40 np0005479822.novalocal sshd-session[3712]: Disconnected from user zuul 38.102.83.114 port 46680
Oct 10 09:08:40 np0005479822.novalocal sshd-session[3709]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:08:40 np0005479822.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Oct 10 09:08:40 np0005479822.novalocal systemd[1]: session-3.scope: Consumed 1.829s CPU time.
Oct 10 09:08:40 np0005479822.novalocal systemd-logind[789]: Session 3 logged out. Waiting for processes to exit.
Oct 10 09:08:40 np0005479822.novalocal systemd-logind[789]: Removed session 3.
Oct 10 09:09:18 np0005479822.novalocal sshd-session[4052]: Accepted publickey for zuul from 38.102.83.114 port 50170 ssh2: RSA SHA256:RwPGCkYG1Mlcunwa9tTlXvLSrYLunSGhwxtMMuIfos4
Oct 10 09:09:18 np0005479822.novalocal systemd-logind[789]: New session 4 of user zuul.
Oct 10 09:09:18 np0005479822.novalocal systemd[1]: Started Session 4 of User zuul.
Oct 10 09:09:18 np0005479822.novalocal sshd-session[4052]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:09:18 np0005479822.novalocal sudo[4131]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pblycnhzscciilfcxeiucajambpkhbxu ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 10 09:09:18 np0005479822.novalocal sudo[4131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:09:19 np0005479822.novalocal python3[4133]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 09:09:19 np0005479822.novalocal sudo[4131]: pam_unix(sudo:session): session closed for user root
Oct 10 09:09:19 np0005479822.novalocal sudo[4204]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gefpeqzssrmadnogijzvxhojzspuhuqb ; OS_CLOUD=vexxhost /usr/bin/python3'
Oct 10 09:09:19 np0005479822.novalocal sudo[4204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:09:19 np0005479822.novalocal python3[4206]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760087358.7119231-373-218214675868800/source _original_basename=tmpgwmafwp5 follow=False checksum=0edcb8668707f95c4678608a04fc39cdafb654ec backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:09:19 np0005479822.novalocal sudo[4204]: pam_unix(sudo:session): session closed for user root
Oct 10 09:09:23 np0005479822.novalocal sshd-session[4055]: Connection closed by 38.102.83.114 port 50170
Oct 10 09:09:23 np0005479822.novalocal sshd-session[4052]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:09:23 np0005479822.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Oct 10 09:09:23 np0005479822.novalocal systemd-logind[789]: Session 4 logged out. Waiting for processes to exit.
Oct 10 09:09:23 np0005479822.novalocal systemd-logind[789]: Removed session 4.
Oct 10 09:10:41 np0005479822.novalocal systemd[1059]: Created slice User Background Tasks Slice.
Oct 10 09:10:41 np0005479822.novalocal systemd[1059]: Starting Cleanup of User's Temporary Files and Directories...
Oct 10 09:10:41 np0005479822.novalocal systemd[1059]: Finished Cleanup of User's Temporary Files and Directories.
Oct 10 09:15:52 np0005479822.novalocal sshd-session[4237]: Accepted publickey for zuul from 38.102.83.114 port 42896 ssh2: RSA SHA256:RwPGCkYG1Mlcunwa9tTlXvLSrYLunSGhwxtMMuIfos4
Oct 10 09:15:52 np0005479822.novalocal systemd-logind[789]: New session 5 of user zuul.
Oct 10 09:15:52 np0005479822.novalocal systemd[1]: Started Session 5 of User zuul.
Oct 10 09:15:52 np0005479822.novalocal sshd-session[4237]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:15:52 np0005479822.novalocal sudo[4264]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdbfkoysdhjossemepelwuvagtqfmerm ; /usr/bin/python3'
Oct 10 09:15:52 np0005479822.novalocal sudo[4264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:15:52 np0005479822.novalocal python3[4266]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-305a-504c-000000001cfe-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:15:52 np0005479822.novalocal sudo[4264]: pam_unix(sudo:session): session closed for user root
Oct 10 09:15:53 np0005479822.novalocal sudo[4293]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suklsthmwtomrbwvmbhcmogpfffxodnk ; /usr/bin/python3'
Oct 10 09:15:53 np0005479822.novalocal sudo[4293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:15:53 np0005479822.novalocal python3[4295]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:15:53 np0005479822.novalocal sudo[4293]: pam_unix(sudo:session): session closed for user root
Oct 10 09:15:53 np0005479822.novalocal sudo[4319]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drfjdtvfkullrkljoqeswjdasooueedl ; /usr/bin/python3'
Oct 10 09:15:53 np0005479822.novalocal sudo[4319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:15:53 np0005479822.novalocal python3[4321]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:15:53 np0005479822.novalocal sudo[4319]: pam_unix(sudo:session): session closed for user root
Oct 10 09:15:53 np0005479822.novalocal sudo[4345]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrsboairatvpqywdqbyquxrlhwwfblwa ; /usr/bin/python3'
Oct 10 09:15:53 np0005479822.novalocal sudo[4345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:15:53 np0005479822.novalocal python3[4347]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:15:53 np0005479822.novalocal sudo[4345]: pam_unix(sudo:session): session closed for user root
Oct 10 09:15:53 np0005479822.novalocal sudo[4371]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unizbhewqdilljhblxwbmdlwzdptdbif ; /usr/bin/python3'
Oct 10 09:15:53 np0005479822.novalocal sudo[4371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:15:54 np0005479822.novalocal python3[4373]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:15:54 np0005479822.novalocal sudo[4371]: pam_unix(sudo:session): session closed for user root
Oct 10 09:15:54 np0005479822.novalocal sudo[4397]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgcwzqhxxvrrbrcbuaezcrdqoyggpiyc ; /usr/bin/python3'
Oct 10 09:15:54 np0005479822.novalocal sudo[4397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:15:54 np0005479822.novalocal python3[4399]: ansible-ansible.builtin.lineinfile Invoked with path=/etc/systemd/system.conf regexp=^#DefaultIOAccounting=no line=DefaultIOAccounting=yes state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:15:54 np0005479822.novalocal python3[4399]: ansible-ansible.builtin.lineinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Oct 10 09:15:54 np0005479822.novalocal sudo[4397]: pam_unix(sudo:session): session closed for user root
Oct 10 09:15:55 np0005479822.novalocal sudo[4423]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wefamrlrfpceivenbocqzazswjbkciuv ; /usr/bin/python3'
Oct 10 09:15:55 np0005479822.novalocal sudo[4423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:15:55 np0005479822.novalocal python3[4425]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 10 09:15:55 np0005479822.novalocal systemd[1]: Reloading.
Oct 10 09:15:55 np0005479822.novalocal systemd-rc-local-generator[4447]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:15:55 np0005479822.novalocal sudo[4423]: pam_unix(sudo:session): session closed for user root
Oct 10 09:15:56 np0005479822.novalocal sudo[4478]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpjvpwswamdeqxrzyygiufvqymzmvwnk ; /usr/bin/python3'
Oct 10 09:15:56 np0005479822.novalocal sudo[4478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:15:57 np0005479822.novalocal python3[4480]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Oct 10 09:15:57 np0005479822.novalocal sudo[4478]: pam_unix(sudo:session): session closed for user root
Oct 10 09:15:57 np0005479822.novalocal sudo[4504]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eocpcyyzsgeiafocgzeszoomxxmhuvmr ; /usr/bin/python3'
Oct 10 09:15:57 np0005479822.novalocal sudo[4504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:15:57 np0005479822.novalocal python3[4506]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:15:57 np0005479822.novalocal sudo[4504]: pam_unix(sudo:session): session closed for user root
Oct 10 09:15:57 np0005479822.novalocal sudo[4532]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mudoircceddntpqekujvcuvlonxfyjps ; /usr/bin/python3'
Oct 10 09:15:57 np0005479822.novalocal sudo[4532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:15:57 np0005479822.novalocal python3[4534]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:15:57 np0005479822.novalocal sudo[4532]: pam_unix(sudo:session): session closed for user root
Oct 10 09:15:57 np0005479822.novalocal sudo[4560]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzcmgwsgrycieahztzrcugplhpxchyly ; /usr/bin/python3'
Oct 10 09:15:57 np0005479822.novalocal sudo[4560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:15:58 np0005479822.novalocal python3[4562]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:15:58 np0005479822.novalocal sudo[4560]: pam_unix(sudo:session): session closed for user root
Oct 10 09:15:58 np0005479822.novalocal sudo[4588]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzemrrsszqzadaotrtnnaxmnjgraxxmf ; /usr/bin/python3'
Oct 10 09:15:58 np0005479822.novalocal sudo[4588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:15:58 np0005479822.novalocal python3[4590]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:15:58 np0005479822.novalocal sudo[4588]: pam_unix(sudo:session): session closed for user root
Oct 10 09:15:59 np0005479822.novalocal python3[4617]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-305a-504c-000000001d04-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:15:59 np0005479822.novalocal python3[4647]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:16:02 np0005479822.novalocal sshd-session[4240]: Connection closed by 38.102.83.114 port 42896
Oct 10 09:16:02 np0005479822.novalocal sshd-session[4237]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:16:02 np0005479822.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Oct 10 09:16:02 np0005479822.novalocal systemd[1]: session-5.scope: Consumed 3.745s CPU time.
Oct 10 09:16:02 np0005479822.novalocal systemd-logind[789]: Session 5 logged out. Waiting for processes to exit.
Oct 10 09:16:02 np0005479822.novalocal systemd-logind[789]: Removed session 5.
Oct 10 09:16:04 np0005479822.novalocal sshd-session[4654]: Accepted publickey for zuul from 38.102.83.114 port 48650 ssh2: RSA SHA256:RwPGCkYG1Mlcunwa9tTlXvLSrYLunSGhwxtMMuIfos4
Oct 10 09:16:04 np0005479822.novalocal systemd-logind[789]: New session 6 of user zuul.
Oct 10 09:16:04 np0005479822.novalocal systemd[1]: Started Session 6 of User zuul.
Oct 10 09:16:04 np0005479822.novalocal sshd-session[4654]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:16:04 np0005479822.novalocal sudo[4681]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftfqkpeeikrosxjdvmjrdlnrgjoyoxou ; /usr/bin/python3'
Oct 10 09:16:04 np0005479822.novalocal sudo[4681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:16:04 np0005479822.novalocal python3[4683]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 10 09:16:19 np0005479822.novalocal kernel: SELinux:  Converting 363 SID table entries...
Oct 10 09:16:19 np0005479822.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Oct 10 09:16:19 np0005479822.novalocal kernel: SELinux:  policy capability open_perms=1
Oct 10 09:16:19 np0005479822.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Oct 10 09:16:19 np0005479822.novalocal kernel: SELinux:  policy capability always_check_network=0
Oct 10 09:16:19 np0005479822.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 10 09:16:19 np0005479822.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 10 09:16:19 np0005479822.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 10 09:16:28 np0005479822.novalocal kernel: SELinux:  Converting 363 SID table entries...
Oct 10 09:16:28 np0005479822.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Oct 10 09:16:28 np0005479822.novalocal kernel: SELinux:  policy capability open_perms=1
Oct 10 09:16:28 np0005479822.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Oct 10 09:16:28 np0005479822.novalocal kernel: SELinux:  policy capability always_check_network=0
Oct 10 09:16:28 np0005479822.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 10 09:16:28 np0005479822.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 10 09:16:28 np0005479822.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 10 09:16:36 np0005479822.novalocal kernel: SELinux:  Converting 363 SID table entries...
Oct 10 09:16:36 np0005479822.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Oct 10 09:16:36 np0005479822.novalocal kernel: SELinux:  policy capability open_perms=1
Oct 10 09:16:36 np0005479822.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Oct 10 09:16:36 np0005479822.novalocal kernel: SELinux:  policy capability always_check_network=0
Oct 10 09:16:36 np0005479822.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 10 09:16:36 np0005479822.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 10 09:16:36 np0005479822.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 10 09:16:38 np0005479822.novalocal setsebool[4753]: The virt_use_nfs policy boolean was changed to 1 by root
Oct 10 09:16:38 np0005479822.novalocal setsebool[4753]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Oct 10 09:16:50 np0005479822.novalocal kernel: SELinux:  Converting 366 SID table entries...
Oct 10 09:16:50 np0005479822.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Oct 10 09:16:50 np0005479822.novalocal kernel: SELinux:  policy capability open_perms=1
Oct 10 09:16:50 np0005479822.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Oct 10 09:16:50 np0005479822.novalocal kernel: SELinux:  policy capability always_check_network=0
Oct 10 09:16:50 np0005479822.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 10 09:16:50 np0005479822.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 10 09:16:50 np0005479822.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 10 09:17:08 np0005479822.novalocal dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct 10 09:17:08 np0005479822.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 10 09:17:08 np0005479822.novalocal systemd[1]: Starting man-db-cache-update.service...
Oct 10 09:17:08 np0005479822.novalocal systemd[1]: Reloading.
Oct 10 09:17:09 np0005479822.novalocal systemd-rc-local-generator[5509]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:17:09 np0005479822.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Oct 10 09:17:09 np0005479822.novalocal systemd[1]: Starting PackageKit Daemon...
Oct 10 09:17:09 np0005479822.novalocal PackageKit[6258]: daemon start
Oct 10 09:17:09 np0005479822.novalocal systemd[1]: Starting Authorization Manager...
Oct 10 09:17:10 np0005479822.novalocal polkitd[6374]: Started polkitd version 0.117
Oct 10 09:17:10 np0005479822.novalocal polkitd[6374]: Loading rules from directory /etc/polkit-1/rules.d
Oct 10 09:17:10 np0005479822.novalocal polkitd[6374]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 10 09:17:10 np0005479822.novalocal polkitd[6374]: Finished loading, compiling and executing 3 rules
Oct 10 09:17:10 np0005479822.novalocal polkitd[6374]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Oct 10 09:17:10 np0005479822.novalocal systemd[1]: Started Authorization Manager.
Oct 10 09:17:10 np0005479822.novalocal systemd[1]: Started PackageKit Daemon.
Oct 10 09:17:10 np0005479822.novalocal sudo[4681]: pam_unix(sudo:session): session closed for user root
Oct 10 09:17:41 np0005479822.novalocal python3[18088]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163efc-24cc-c8da-0a8f-00000000000c-1-compute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:17:42 np0005479822.novalocal kernel: evm: overlay not supported
Oct 10 09:17:42 np0005479822.novalocal systemd[1059]: Starting D-Bus User Message Bus...
Oct 10 09:17:42 np0005479822.novalocal dbus-broker-launch[18506]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Oct 10 09:17:42 np0005479822.novalocal dbus-broker-launch[18506]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Oct 10 09:17:42 np0005479822.novalocal systemd[1059]: Started D-Bus User Message Bus.
Oct 10 09:17:42 np0005479822.novalocal dbus-broker-lau[18506]: Ready
Oct 10 09:17:42 np0005479822.novalocal systemd[1059]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct 10 09:17:42 np0005479822.novalocal systemd[1059]: Created slice Slice /user.
Oct 10 09:17:42 np0005479822.novalocal systemd[1059]: podman-18441.scope: unit configures an IP firewall, but not running as root.
Oct 10 09:17:42 np0005479822.novalocal systemd[1059]: (This warning is only shown for the first unit using IP firewalling.)
Oct 10 09:17:42 np0005479822.novalocal systemd[1059]: Started podman-18441.scope.
Oct 10 09:17:42 np0005479822.novalocal systemd[1059]: Started podman-pause-ba406518.scope.
Oct 10 09:17:43 np0005479822.novalocal sshd-session[4657]: Connection closed by 38.102.83.114 port 48650
Oct 10 09:17:43 np0005479822.novalocal sshd-session[4654]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:17:43 np0005479822.novalocal systemd[1]: session-6.scope: Deactivated successfully.
Oct 10 09:17:43 np0005479822.novalocal systemd[1]: session-6.scope: Consumed 1min 1.661s CPU time.
Oct 10 09:17:43 np0005479822.novalocal systemd-logind[789]: Session 6 logged out. Waiting for processes to exit.
Oct 10 09:17:43 np0005479822.novalocal systemd-logind[789]: Removed session 6.
Oct 10 09:17:56 np0005479822.novalocal sshd-session[23397]: Unable to negotiate with 38.102.83.82 port 56378: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Oct 10 09:17:56 np0005479822.novalocal sshd-session[23399]: Unable to negotiate with 38.102.83.82 port 56354: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Oct 10 09:17:56 np0005479822.novalocal sshd-session[23395]: Unable to negotiate with 38.102.83.82 port 56362: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Oct 10 09:17:56 np0005479822.novalocal sshd-session[23394]: Connection closed by 38.102.83.82 port 56332 [preauth]
Oct 10 09:17:56 np0005479822.novalocal sshd-session[23398]: Connection closed by 38.102.83.82 port 56348 [preauth]
Oct 10 09:18:01 np0005479822.novalocal sshd-session[24742]: Accepted publickey for zuul from 38.102.83.114 port 37666 ssh2: RSA SHA256:RwPGCkYG1Mlcunwa9tTlXvLSrYLunSGhwxtMMuIfos4
Oct 10 09:18:01 np0005479822.novalocal systemd-logind[789]: New session 7 of user zuul.
Oct 10 09:18:01 np0005479822.novalocal systemd[1]: Started Session 7 of User zuul.
Oct 10 09:18:01 np0005479822.novalocal sshd-session[24742]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:18:01 np0005479822.novalocal python3[24837]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLKa/9QXUogxywf992nox1ioEGXyzZloryP7qu5KhbNyvfDQXbxckfHpSRrx2tURERGS47wcXt32qRf5GMN12x0= zuul@np0005479820.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:18:01 np0005479822.novalocal sudo[24968]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dprimmsnddbvgtionbagdrcfredpyygi ; /usr/bin/python3'
Oct 10 09:18:01 np0005479822.novalocal sudo[24968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:18:01 np0005479822.novalocal python3[24980]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLKa/9QXUogxywf992nox1ioEGXyzZloryP7qu5KhbNyvfDQXbxckfHpSRrx2tURERGS47wcXt32qRf5GMN12x0= zuul@np0005479820.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:18:01 np0005479822.novalocal sudo[24968]: pam_unix(sudo:session): session closed for user root
Oct 10 09:18:02 np0005479822.novalocal sudo[25254]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izrsnubinktsuckuewrxkqxipteacngo ; /usr/bin/python3'
Oct 10 09:18:02 np0005479822.novalocal sudo[25254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:18:02 np0005479822.novalocal python3[25264]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005479822.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Oct 10 09:18:02 np0005479822.novalocal useradd[25325]: new group: name=cloud-admin, GID=1002
Oct 10 09:18:02 np0005479822.novalocal useradd[25325]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Oct 10 09:18:02 np0005479822.novalocal sudo[25254]: pam_unix(sudo:session): session closed for user root
Oct 10 09:18:03 np0005479822.novalocal sudo[25457]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bobeiqneryarbmzdlrcqwvdydztxroth ; /usr/bin/python3'
Oct 10 09:18:03 np0005479822.novalocal sudo[25457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:18:03 np0005479822.novalocal python3[25468]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLKa/9QXUogxywf992nox1ioEGXyzZloryP7qu5KhbNyvfDQXbxckfHpSRrx2tURERGS47wcXt32qRf5GMN12x0= zuul@np0005479820.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct 10 09:18:03 np0005479822.novalocal sudo[25457]: pam_unix(sudo:session): session closed for user root
Oct 10 09:18:03 np0005479822.novalocal sudo[25703]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tixvvmpgsdtxatenxlraobhsbqbhgltx ; /usr/bin/python3'
Oct 10 09:18:03 np0005479822.novalocal sudo[25703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:18:03 np0005479822.novalocal python3[25710]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 09:18:03 np0005479822.novalocal sudo[25703]: pam_unix(sudo:session): session closed for user root
Oct 10 09:18:04 np0005479822.novalocal sudo[25926]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qimzccajquypirqddmqpegzcshkewlhk ; /usr/bin/python3'
Oct 10 09:18:04 np0005479822.novalocal sudo[25926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:18:04 np0005479822.novalocal python3[25932]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1760087883.4861338-151-36587341816073/source _original_basename=tmp22ygpsbj follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:18:04 np0005479822.novalocal sudo[25926]: pam_unix(sudo:session): session closed for user root
Oct 10 09:18:04 np0005479822.novalocal sudo[26235]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzgdbttgogdasbzdkxykdyvmxctytnby ; /usr/bin/python3'
Oct 10 09:18:04 np0005479822.novalocal sudo[26235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:18:05 np0005479822.novalocal python3[26247]: ansible-ansible.builtin.hostname Invoked with name=compute-1 use=systemd
Oct 10 09:18:05 np0005479822.novalocal systemd[1]: Starting Hostname Service...
Oct 10 09:18:05 np0005479822.novalocal systemd[1]: Started Hostname Service.
Oct 10 09:18:05 np0005479822.novalocal systemd-hostnamed[26348]: Changed pretty hostname to 'compute-1'
Oct 10 09:18:05 compute-1 systemd-hostnamed[26348]: Hostname set to <compute-1> (static)
Oct 10 09:18:05 compute-1 NetworkManager[3951]: <info>  [1760087885.3177] hostname: static hostname changed from "np0005479822.novalocal" to "compute-1"
Oct 10 09:18:05 compute-1 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 10 09:18:05 compute-1 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 10 09:18:05 compute-1 sudo[26235]: pam_unix(sudo:session): session closed for user root
Oct 10 09:18:05 compute-1 sshd-session[24788]: Connection closed by 38.102.83.114 port 37666
Oct 10 09:18:05 compute-1 sshd-session[24742]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:18:05 compute-1 systemd[1]: session-7.scope: Deactivated successfully.
Oct 10 09:18:05 compute-1 systemd[1]: session-7.scope: Consumed 2.600s CPU time.
Oct 10 09:18:05 compute-1 systemd-logind[789]: Session 7 logged out. Waiting for processes to exit.
Oct 10 09:18:05 compute-1 systemd-logind[789]: Removed session 7.
Oct 10 09:18:05 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 10 09:18:05 compute-1 systemd[1]: Finished man-db-cache-update.service.
Oct 10 09:18:05 compute-1 systemd[1]: man-db-cache-update.service: Consumed 1min 9.352s CPU time.
Oct 10 09:18:05 compute-1 systemd[1]: run-r7d12f6832ac24b84b90a692731cf39e8.service: Deactivated successfully.
Oct 10 09:18:15 compute-1 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 10 09:18:35 compute-1 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 10 09:20:31 compute-1 systemd[1]: Starting Cleanup of Temporary Directories...
Oct 10 09:20:31 compute-1 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Oct 10 09:20:31 compute-1 systemd[1]: Finished Cleanup of Temporary Directories.
Oct 10 09:20:31 compute-1 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Oct 10 09:21:41 compute-1 sshd-session[26531]: Accepted publickey for zuul from 38.102.83.82 port 55570 ssh2: RSA SHA256:RwPGCkYG1Mlcunwa9tTlXvLSrYLunSGhwxtMMuIfos4
Oct 10 09:21:41 compute-1 systemd-logind[789]: New session 8 of user zuul.
Oct 10 09:21:41 compute-1 systemd[1]: Started Session 8 of User zuul.
Oct 10 09:21:41 compute-1 sshd-session[26531]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:21:41 compute-1 python3[26607]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:21:43 compute-1 sudo[26721]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-claahlavuycridgsnapjmphlebpqbzrg ; /usr/bin/python3'
Oct 10 09:21:43 compute-1 sudo[26721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:21:43 compute-1 python3[26723]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 09:21:43 compute-1 sudo[26721]: pam_unix(sudo:session): session closed for user root
Oct 10 09:21:43 compute-1 sudo[26794]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmjndilabdyulqexmhrqwyleccfjogja ; /usr/bin/python3'
Oct 10 09:21:43 compute-1 sudo[26794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:21:44 compute-1 python3[26796]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760088103.3429818-30670-248067038063451/source mode=0755 _original_basename=delorean.repo follow=False checksum=c02c26d38f431b15f6463fc53c3d93ed5138ff07 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:21:44 compute-1 sudo[26794]: pam_unix(sudo:session): session closed for user root
Oct 10 09:21:44 compute-1 sudo[26820]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbowexfevmgamglenrivjvrlntsuxlya ; /usr/bin/python3'
Oct 10 09:21:44 compute-1 sudo[26820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:21:44 compute-1 python3[26822]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 09:21:44 compute-1 sudo[26820]: pam_unix(sudo:session): session closed for user root
Oct 10 09:21:44 compute-1 sudo[26893]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxmfhigycntwafowlprbaysodtfbmzyd ; /usr/bin/python3'
Oct 10 09:21:44 compute-1 sudo[26893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:21:44 compute-1 python3[26895]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760088103.3429818-30670-248067038063451/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:21:44 compute-1 sudo[26893]: pam_unix(sudo:session): session closed for user root
Oct 10 09:21:44 compute-1 sudo[26919]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbvcvsjkndccuxerjfbizxhwmykklznt ; /usr/bin/python3'
Oct 10 09:21:44 compute-1 sudo[26919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:21:44 compute-1 python3[26921]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 09:21:44 compute-1 sudo[26919]: pam_unix(sudo:session): session closed for user root
Oct 10 09:21:45 compute-1 sudo[26992]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-httcrsxokqjpaxutqexlcrmxulhrowtq ; /usr/bin/python3'
Oct 10 09:21:45 compute-1 sudo[26992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:21:45 compute-1 python3[26994]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760088103.3429818-30670-248067038063451/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:21:45 compute-1 sudo[26992]: pam_unix(sudo:session): session closed for user root
Oct 10 09:21:45 compute-1 sudo[27018]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flvljlsnwmzfvmsvfqzktuypimxqeqmc ; /usr/bin/python3'
Oct 10 09:21:45 compute-1 sudo[27018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:21:45 compute-1 python3[27020]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 09:21:45 compute-1 sudo[27018]: pam_unix(sudo:session): session closed for user root
Oct 10 09:21:45 compute-1 sudo[27091]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxyxsroegznnvdvexolqcruroptyzaqt ; /usr/bin/python3'
Oct 10 09:21:45 compute-1 sudo[27091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:21:45 compute-1 python3[27093]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760088103.3429818-30670-248067038063451/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:21:45 compute-1 sudo[27091]: pam_unix(sudo:session): session closed for user root
Oct 10 09:21:46 compute-1 sudo[27117]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vurriajlwliairwylowagrvxjdebmfuq ; /usr/bin/python3'
Oct 10 09:21:46 compute-1 sudo[27117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:21:46 compute-1 python3[27119]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 09:21:46 compute-1 sudo[27117]: pam_unix(sudo:session): session closed for user root
Oct 10 09:21:46 compute-1 sudo[27190]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgbooznvakbuuvhgaxvbkiltknchuqql ; /usr/bin/python3'
Oct 10 09:21:46 compute-1 sudo[27190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:21:46 compute-1 python3[27192]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760088103.3429818-30670-248067038063451/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:21:46 compute-1 sudo[27190]: pam_unix(sudo:session): session closed for user root
Oct 10 09:21:46 compute-1 sudo[27216]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckobrrpktgkjtxobwomvltrlxspbqxib ; /usr/bin/python3'
Oct 10 09:21:46 compute-1 sudo[27216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:21:46 compute-1 python3[27218]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 09:21:46 compute-1 sudo[27216]: pam_unix(sudo:session): session closed for user root
Oct 10 09:21:47 compute-1 sudo[27289]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvfypccnizoaemsjihqskdfmzatmepek ; /usr/bin/python3'
Oct 10 09:21:47 compute-1 sudo[27289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:21:47 compute-1 python3[27292]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760088103.3429818-30670-248067038063451/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:21:47 compute-1 sudo[27289]: pam_unix(sudo:session): session closed for user root
Oct 10 09:21:47 compute-1 sudo[27316]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfunuyejgaglfvxajpnrvldrcrgalbug ; /usr/bin/python3'
Oct 10 09:21:47 compute-1 sudo[27316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:21:47 compute-1 python3[27318]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 09:21:47 compute-1 sudo[27316]: pam_unix(sudo:session): session closed for user root
Oct 10 09:21:47 compute-1 sudo[27389]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzovantcnpnzcezvyaqvvxknzjtnkaag ; /usr/bin/python3'
Oct 10 09:21:47 compute-1 sudo[27389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:21:48 compute-1 python3[27391]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760088103.3429818-30670-248067038063451/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=75ca8f9fe9a538824fd094f239c30e8ce8652e8a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:21:48 compute-1 sudo[27389]: pam_unix(sudo:session): session closed for user root
Oct 10 09:21:53 compute-1 irqbalance[780]: Cannot change IRQ 27 affinity: Operation not permitted
Oct 10 09:21:53 compute-1 irqbalance[780]: IRQ 27 affinity is now unmanaged
Oct 10 09:21:59 compute-1 python3[27439]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:22:15 compute-1 PackageKit[6258]: daemon quit
Oct 10 09:22:15 compute-1 systemd[1]: packagekit.service: Deactivated successfully.
Oct 10 09:24:32 compute-1 sshd-session[27444]: Connection closed by 194.32.87.93 port 55558
Oct 10 09:24:33 compute-1 sshd-session[27445]: Connection closed by authenticating user root 194.32.87.93 port 55732 [preauth]
Oct 10 09:24:34 compute-1 sshd-session[27447]: Invalid user admin from 194.32.87.93 port 56362
Oct 10 09:24:34 compute-1 sshd-session[27447]: Connection closed by invalid user admin 194.32.87.93 port 56362 [preauth]
Oct 10 09:24:35 compute-1 sshd-session[27449]: Invalid user kali from 194.32.87.93 port 56958
Oct 10 09:24:35 compute-1 sshd-session[27449]: Connection closed by invalid user kali 194.32.87.93 port 56958 [preauth]
Oct 10 09:24:36 compute-1 sshd-session[27451]: Invalid user minecraft from 194.32.87.93 port 57484
Oct 10 09:24:36 compute-1 sshd-session[27451]: Connection closed by invalid user minecraft 194.32.87.93 port 57484 [preauth]
Oct 10 09:24:36 compute-1 sshd-session[27453]: Invalid user guestuser from 194.32.87.93 port 58004
Oct 10 09:24:37 compute-1 sshd-session[27453]: Connection closed by invalid user guestuser 194.32.87.93 port 58004 [preauth]
Oct 10 09:24:37 compute-1 sshd-session[27455]: Invalid user steam from 194.32.87.93 port 58558
Oct 10 09:24:37 compute-1 sshd-session[27455]: Connection closed by invalid user steam 194.32.87.93 port 58558 [preauth]
Oct 10 09:24:38 compute-1 sshd-session[27457]: Invalid user mysql from 194.32.87.93 port 59154
Oct 10 09:24:38 compute-1 sshd-session[27457]: Connection closed by invalid user mysql 194.32.87.93 port 59154 [preauth]
Oct 10 09:24:39 compute-1 sshd-session[27459]: Connection closed by authenticating user root 194.32.87.93 port 59898 [preauth]
Oct 10 09:24:40 compute-1 sshd-session[27461]: Connection closed by authenticating user root 194.32.87.93 port 60412 [preauth]
Oct 10 09:24:41 compute-1 sshd-session[27463]: Invalid user deploy from 194.32.87.93 port 60932
Oct 10 09:24:41 compute-1 sshd-session[27463]: Connection closed by invalid user deploy 194.32.87.93 port 60932 [preauth]
Oct 10 09:24:42 compute-1 sshd-session[27465]: Invalid user deploy from 194.32.87.93 port 33240
Oct 10 09:24:42 compute-1 sshd-session[27465]: Connection closed by invalid user deploy 194.32.87.93 port 33240 [preauth]
Oct 10 09:24:42 compute-1 sshd-session[27467]: Invalid user oracle from 194.32.87.93 port 33808
Oct 10 09:24:42 compute-1 sshd-session[27467]: Connection closed by invalid user oracle 194.32.87.93 port 33808 [preauth]
Oct 10 09:24:43 compute-1 sshd-session[27469]: Connection closed by authenticating user root 194.32.87.93 port 34354 [preauth]
Oct 10 09:24:44 compute-1 sshd-session[27471]: Invalid user oracle from 194.32.87.93 port 34896
Oct 10 09:24:44 compute-1 sshd-session[27471]: Connection closed by invalid user oracle 194.32.87.93 port 34896 [preauth]
Oct 10 09:24:45 compute-1 sshd-session[27473]: Invalid user ubuntu from 194.32.87.93 port 35404
Oct 10 09:24:45 compute-1 sshd-session[27473]: Connection closed by invalid user ubuntu 194.32.87.93 port 35404 [preauth]
Oct 10 09:24:46 compute-1 sshd-session[27475]: Invalid user guest from 194.32.87.93 port 36214
Oct 10 09:24:46 compute-1 sshd-session[27475]: Connection closed by invalid user guest 194.32.87.93 port 36214 [preauth]
Oct 10 09:24:47 compute-1 sshd-session[27477]: Connection closed by authenticating user root 194.32.87.93 port 36778 [preauth]
Oct 10 09:24:48 compute-1 sshd-session[27479]: Connection closed by authenticating user root 194.32.87.93 port 37314 [preauth]
Oct 10 09:24:49 compute-1 sshd-session[27481]: Connection closed by authenticating user root 194.32.87.93 port 37864 [preauth]
Oct 10 09:24:50 compute-1 sshd-session[27483]: Connection closed by authenticating user root 194.32.87.93 port 38462 [preauth]
Oct 10 09:24:51 compute-1 sshd-session[27485]: Connection closed by authenticating user root 194.32.87.93 port 39066 [preauth]
Oct 10 09:24:51 compute-1 sshd-session[27487]: Invalid user ts3 from 194.32.87.93 port 39568
Oct 10 09:24:52 compute-1 sshd-session[27487]: Connection closed by invalid user ts3 194.32.87.93 port 39568 [preauth]
Oct 10 09:24:52 compute-1 sshd-session[27489]: Invalid user orangepi from 194.32.87.93 port 40162
Oct 10 09:24:52 compute-1 sshd-session[27489]: Connection closed by invalid user orangepi 194.32.87.93 port 40162 [preauth]
Oct 10 09:24:53 compute-1 sshd-session[27491]: Connection closed by authenticating user root 194.32.87.93 port 40832 [preauth]
Oct 10 09:24:54 compute-1 sshd-session[27493]: Invalid user devops from 194.32.87.93 port 41422
Oct 10 09:24:55 compute-1 sshd-session[27493]: Connection closed by invalid user devops 194.32.87.93 port 41422 [preauth]
Oct 10 09:24:55 compute-1 sshd-session[27495]: Connection closed by authenticating user root 194.32.87.93 port 42078 [preauth]
Oct 10 09:24:56 compute-1 sshd-session[27497]: Invalid user administrator from 194.32.87.93 port 42636
Oct 10 09:24:56 compute-1 sshd-session[27497]: Connection closed by invalid user administrator 194.32.87.93 port 42636 [preauth]
Oct 10 09:24:57 compute-1 sshd-session[27499]: Invalid user deploy from 194.32.87.93 port 43136
Oct 10 09:24:57 compute-1 sshd-session[27499]: Connection closed by invalid user deploy 194.32.87.93 port 43136 [preauth]
Oct 10 09:24:58 compute-1 sshd-session[27501]: Connection closed by authenticating user root 194.32.87.93 port 43734 [preauth]
Oct 10 09:24:59 compute-1 sshd-session[27503]: Invalid user linaro from 194.32.87.93 port 44336
Oct 10 09:24:59 compute-1 sshd-session[27503]: Connection closed by invalid user linaro 194.32.87.93 port 44336 [preauth]
Oct 10 09:25:00 compute-1 sshd-session[27505]: Connection closed by authenticating user root 194.32.87.93 port 44822 [preauth]
Oct 10 09:25:00 compute-1 sshd-session[27507]: Invalid user testuser from 194.32.87.93 port 45332
Oct 10 09:25:01 compute-1 sshd-session[27507]: Connection closed by invalid user testuser 194.32.87.93 port 45332 [preauth]
Oct 10 09:25:01 compute-1 sshd-session[27509]: Invalid user ubnt from 194.32.87.93 port 45782
Oct 10 09:25:01 compute-1 sshd-session[27509]: Connection closed by invalid user ubnt 194.32.87.93 port 45782 [preauth]
Oct 10 09:25:02 compute-1 sshd-session[27511]: Invalid user deploy from 194.32.87.93 port 46266
Oct 10 09:25:02 compute-1 sshd-session[27511]: Connection closed by invalid user deploy 194.32.87.93 port 46266 [preauth]
Oct 10 09:25:03 compute-1 sshd-session[27513]: Invalid user test from 194.32.87.93 port 46804
Oct 10 09:25:03 compute-1 sshd-session[27513]: Connection closed by invalid user test 194.32.87.93 port 46804 [preauth]
Oct 10 09:25:04 compute-1 sshd-session[27515]: Invalid user testuser from 194.32.87.93 port 47298
Oct 10 09:25:04 compute-1 sshd-session[27515]: Connection closed by invalid user testuser 194.32.87.93 port 47298 [preauth]
Oct 10 09:25:04 compute-1 sshd-session[27517]: Invalid user jenkins from 194.32.87.93 port 47836
Oct 10 09:25:05 compute-1 sshd-session[27517]: Connection closed by invalid user jenkins 194.32.87.93 port 47836 [preauth]
Oct 10 09:25:05 compute-1 sshd-session[27519]: Connection closed by authenticating user root 194.32.87.93 port 48378 [preauth]
Oct 10 09:25:06 compute-1 sshd-session[27521]: Invalid user dspace from 194.32.87.93 port 48856
Oct 10 09:25:06 compute-1 sshd-session[27521]: Connection closed by invalid user dspace 194.32.87.93 port 48856 [preauth]
Oct 10 09:25:07 compute-1 sshd-session[27523]: Invalid user odoo18 from 194.32.87.93 port 49330
Oct 10 09:25:07 compute-1 sshd-session[27523]: Connection closed by invalid user odoo18 194.32.87.93 port 49330 [preauth]
Oct 10 09:25:08 compute-1 sshd-session[27525]: Invalid user admin from 194.32.87.93 port 49804
Oct 10 09:25:08 compute-1 sshd-session[27525]: Connection closed by invalid user admin 194.32.87.93 port 49804 [preauth]
Oct 10 09:25:09 compute-1 sshd-session[27527]: Invalid user admin from 194.32.87.93 port 50286
Oct 10 09:25:09 compute-1 sshd-session[27527]: Connection closed by invalid user admin 194.32.87.93 port 50286 [preauth]
Oct 10 09:25:10 compute-1 sshd-session[27529]: Connection closed by authenticating user root 194.32.87.93 port 50934 [preauth]
Oct 10 09:25:11 compute-1 sshd-session[27531]: Invalid user debian from 194.32.87.93 port 51498
Oct 10 09:25:11 compute-1 sshd-session[27531]: Connection closed by invalid user debian 194.32.87.93 port 51498 [preauth]
Oct 10 09:25:11 compute-1 sshd-session[27533]: Invalid user guest from 194.32.87.93 port 52000
Oct 10 09:25:12 compute-1 sshd-session[27533]: Connection closed by invalid user guest 194.32.87.93 port 52000 [preauth]
Oct 10 09:25:12 compute-1 sshd-session[27535]: Invalid user db2inst1 from 194.32.87.93 port 52478
Oct 10 09:25:12 compute-1 sshd-session[27535]: Connection closed by invalid user db2inst1 194.32.87.93 port 52478 [preauth]
Oct 10 09:25:13 compute-1 sshd-session[27537]: Invalid user oracle from 194.32.87.93 port 52914
Oct 10 09:25:13 compute-1 sshd-session[27537]: Connection closed by invalid user oracle 194.32.87.93 port 52914 [preauth]
Oct 10 09:25:14 compute-1 sshd-session[27539]: Connection closed by authenticating user root 194.32.87.93 port 53382 [preauth]
Oct 10 09:25:15 compute-1 sshd-session[27541]: Invalid user hadoop from 194.32.87.93 port 53852
Oct 10 09:25:15 compute-1 sshd-session[27541]: Connection closed by invalid user hadoop 194.32.87.93 port 53852 [preauth]
Oct 10 09:25:15 compute-1 sshd-session[27543]: Invalid user esuser from 194.32.87.93 port 54318
Oct 10 09:25:16 compute-1 sshd-session[27543]: Connection closed by invalid user esuser 194.32.87.93 port 54318 [preauth]
Oct 10 09:25:16 compute-1 sshd-session[27545]: Invalid user vagrant from 194.32.87.93 port 54854
Oct 10 09:25:16 compute-1 sshd-session[27545]: Connection closed by invalid user vagrant 194.32.87.93 port 54854 [preauth]
Oct 10 09:25:17 compute-1 sshd-session[27547]: Invalid user admin from 194.32.87.93 port 55334
Oct 10 09:25:17 compute-1 sshd-session[27547]: Connection closed by invalid user admin 194.32.87.93 port 55334 [preauth]
Oct 10 09:26:59 compute-1 sshd-session[26534]: Received disconnect from 38.102.83.82 port 55570:11: disconnected by user
Oct 10 09:26:59 compute-1 sshd-session[26534]: Disconnected from user zuul 38.102.83.82 port 55570
Oct 10 09:26:59 compute-1 sshd-session[26531]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:26:59 compute-1 systemd[1]: session-8.scope: Deactivated successfully.
Oct 10 09:26:59 compute-1 systemd[1]: session-8.scope: Consumed 5.604s CPU time.
Oct 10 09:26:59 compute-1 systemd-logind[789]: Session 8 logged out. Waiting for processes to exit.
Oct 10 09:26:59 compute-1 systemd-logind[789]: Removed session 8.
Oct 10 09:33:17 compute-1 sshd-session[27552]: Accepted publickey for zuul from 192.168.122.30 port 43862 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:33:17 compute-1 systemd-logind[789]: New session 9 of user zuul.
Oct 10 09:33:17 compute-1 systemd[1]: Started Session 9 of User zuul.
Oct 10 09:33:17 compute-1 sshd-session[27552]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:33:18 compute-1 python3.9[27705]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:33:20 compute-1 sudo[27884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtuwxhwpquretmfrqhfbtlkramyjlztg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088799.6364918-57-101620520083910/AnsiballZ_command.py'
Oct 10 09:33:20 compute-1 sudo[27884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:33:20 compute-1 python3.9[27886]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:33:27 compute-1 sudo[27884]: pam_unix(sudo:session): session closed for user root
Oct 10 09:33:28 compute-1 sshd-session[27555]: Connection closed by 192.168.122.30 port 43862
Oct 10 09:33:28 compute-1 sshd-session[27552]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:33:28 compute-1 systemd[1]: session-9.scope: Deactivated successfully.
Oct 10 09:33:28 compute-1 systemd[1]: session-9.scope: Consumed 8.341s CPU time.
Oct 10 09:33:28 compute-1 systemd-logind[789]: Session 9 logged out. Waiting for processes to exit.
Oct 10 09:33:28 compute-1 systemd-logind[789]: Removed session 9.
Oct 10 09:33:43 compute-1 sshd-session[27946]: Accepted publickey for zuul from 192.168.122.30 port 36050 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:33:43 compute-1 systemd-logind[789]: New session 10 of user zuul.
Oct 10 09:33:43 compute-1 systemd[1]: Started Session 10 of User zuul.
Oct 10 09:33:43 compute-1 sshd-session[27946]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:33:44 compute-1 python3.9[28099]: ansible-ansible.legacy.ping Invoked with data=pong
Oct 10 09:33:45 compute-1 python3.9[28273]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:33:46 compute-1 sudo[28423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drjaofbmkjriowpdvpfibjraetsxahlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088826.0462646-94-14746250566850/AnsiballZ_command.py'
Oct 10 09:33:46 compute-1 sudo[28423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:33:46 compute-1 python3.9[28425]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:33:46 compute-1 sudo[28423]: pam_unix(sudo:session): session closed for user root
Oct 10 09:33:47 compute-1 sudo[28576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocmwajhclhczeqljjvjzmfrdbcugmqtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088827.2010603-130-186941809889652/AnsiballZ_stat.py'
Oct 10 09:33:47 compute-1 sudo[28576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:33:47 compute-1 python3.9[28578]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:33:47 compute-1 sudo[28576]: pam_unix(sudo:session): session closed for user root
Oct 10 09:33:48 compute-1 sudo[28728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lappzpgkjfgwwxlmvdvefleknohvavhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088828.1370742-154-113051011869844/AnsiballZ_file.py'
Oct 10 09:33:48 compute-1 sudo[28728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:33:48 compute-1 python3.9[28730]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:33:48 compute-1 sudo[28728]: pam_unix(sudo:session): session closed for user root
Oct 10 09:33:49 compute-1 sudo[28880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnaouerbfogwzlwhmwdelkazphysedru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088829.1543286-178-261762968508380/AnsiballZ_stat.py'
Oct 10 09:33:49 compute-1 sudo[28880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:33:49 compute-1 python3.9[28882]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:33:49 compute-1 sudo[28880]: pam_unix(sudo:session): session closed for user root
Oct 10 09:33:50 compute-1 sudo[29003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evkhneazgzmaimfaqmntydcpmjicumdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088829.1543286-178-261762968508380/AnsiballZ_copy.py'
Oct 10 09:33:50 compute-1 sudo[29003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:33:50 compute-1 python3.9[29005]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1760088829.1543286-178-261762968508380/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:33:50 compute-1 sudo[29003]: pam_unix(sudo:session): session closed for user root
Oct 10 09:33:51 compute-1 sudo[29155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rciluemxxmadjkjptsbplbgbdoklktyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088830.839324-223-106540388787189/AnsiballZ_setup.py'
Oct 10 09:33:51 compute-1 sudo[29155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:33:51 compute-1 python3.9[29157]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:33:51 compute-1 sudo[29155]: pam_unix(sudo:session): session closed for user root
Oct 10 09:33:52 compute-1 sudo[29311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgfamfmkfmudymkbdrwguttgwrhevguh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088831.9665213-247-97114606094314/AnsiballZ_file.py'
Oct 10 09:33:52 compute-1 sudo[29311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:33:52 compute-1 python3.9[29313]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:33:52 compute-1 sudo[29311]: pam_unix(sudo:session): session closed for user root
Oct 10 09:33:53 compute-1 python3.9[29463]: ansible-ansible.builtin.service_facts Invoked
Oct 10 09:34:00 compute-1 python3.9[29718]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:34:01 compute-1 python3.9[29868]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:34:02 compute-1 python3.9[30022]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:34:03 compute-1 sudo[30178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aebwoyedgfyypuveoetupgpcwdtbkfvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088842.9269643-391-2882903135901/AnsiballZ_setup.py'
Oct 10 09:34:03 compute-1 sudo[30178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:34:03 compute-1 python3.9[30180]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 09:34:03 compute-1 sudo[30178]: pam_unix(sudo:session): session closed for user root
Oct 10 09:34:04 compute-1 sudo[30262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qiwbozrdltokmsoksuvynbpyfrbkoacm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088842.9269643-391-2882903135901/AnsiballZ_dnf.py'
Oct 10 09:34:04 compute-1 sudo[30262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:34:04 compute-1 python3.9[30264]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 09:34:49 compute-1 systemd[1]: Reloading.
Oct 10 09:34:49 compute-1 systemd-rc-local-generator[30461]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:34:49 compute-1 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Oct 10 09:34:50 compute-1 systemd[1]: Reloading.
Oct 10 09:34:50 compute-1 systemd-rc-local-generator[30499]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:34:50 compute-1 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Oct 10 09:34:50 compute-1 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Oct 10 09:34:50 compute-1 systemd[1]: Reloading.
Oct 10 09:34:50 compute-1 systemd-rc-local-generator[30537]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:34:50 compute-1 systemd[1]: Listening on LVM2 poll daemon socket.
Oct 10 09:34:50 compute-1 dbus-broker-launch[759]: Noticed file-system modification, trigger reload.
Oct 10 09:34:50 compute-1 dbus-broker-launch[759]: Noticed file-system modification, trigger reload.
Oct 10 09:34:50 compute-1 dbus-broker-launch[759]: Noticed file-system modification, trigger reload.
Oct 10 09:35:59 compute-1 kernel: SELinux:  Converting 2713 SID table entries...
Oct 10 09:35:59 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Oct 10 09:35:59 compute-1 kernel: SELinux:  policy capability open_perms=1
Oct 10 09:35:59 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Oct 10 09:35:59 compute-1 kernel: SELinux:  policy capability always_check_network=0
Oct 10 09:35:59 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 10 09:35:59 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 10 09:35:59 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 10 09:35:59 compute-1 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Oct 10 09:35:59 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 10 09:35:59 compute-1 systemd[1]: Starting man-db-cache-update.service...
Oct 10 09:35:59 compute-1 systemd[1]: Reloading.
Oct 10 09:35:59 compute-1 systemd-rc-local-generator[30850]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:36:00 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 10 09:36:00 compute-1 systemd[1]: Starting PackageKit Daemon...
Oct 10 09:36:00 compute-1 PackageKit[30993]: daemon start
Oct 10 09:36:00 compute-1 systemd[1]: Started PackageKit Daemon.
Oct 10 09:36:00 compute-1 sudo[30262]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:01 compute-1 sudo[31768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfxnryxkqtwwqpexxdhmzhybcghfnekb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088960.7934017-427-24844896421815/AnsiballZ_command.py'
Oct 10 09:36:01 compute-1 sudo[31768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:01 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 10 09:36:01 compute-1 systemd[1]: Finished man-db-cache-update.service.
Oct 10 09:36:01 compute-1 systemd[1]: man-db-cache-update.service: Consumed 1.422s CPU time.
Oct 10 09:36:01 compute-1 systemd[1]: run-r470426eaf19f4e2db9d170b5e5fda398.service: Deactivated successfully.
Oct 10 09:36:01 compute-1 python3.9[31770]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:36:02 compute-1 sudo[31768]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:03 compute-1 sudo[32050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnpmqtdanxqvniqyrrjmsxlmqgwhmtex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088962.7695017-452-65217228917376/AnsiballZ_selinux.py'
Oct 10 09:36:03 compute-1 sudo[32050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:03 compute-1 python3.9[32052]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct 10 09:36:03 compute-1 sudo[32050]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:04 compute-1 sudo[32202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwaijyhmklaorzctznlvbnjmosepzgwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088964.3561037-484-62494889453628/AnsiballZ_command.py'
Oct 10 09:36:04 compute-1 sudo[32202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:04 compute-1 python3.9[32204]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct 10 09:36:05 compute-1 sudo[32202]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:06 compute-1 sudo[32356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gytdykzfmoyoptryppwcqlgyhcmlyqwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088966.083772-508-145307550984685/AnsiballZ_file.py'
Oct 10 09:36:06 compute-1 sudo[32356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:07 compute-1 python3.9[32358]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:36:07 compute-1 sudo[32356]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:09 compute-1 sudo[32508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxybrbrptnvdozfctzkfjbvspoinfomd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088968.490918-532-127682129202705/AnsiballZ_mount.py'
Oct 10 09:36:09 compute-1 sudo[32508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:09 compute-1 python3.9[32510]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct 10 09:36:09 compute-1 sudo[32508]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:10 compute-1 sudo[32660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qubkvhhbxmxdmvghmberymjnjtodhxjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088970.1585186-616-18847277639810/AnsiballZ_file.py'
Oct 10 09:36:10 compute-1 sudo[32660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:10 compute-1 python3.9[32662]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:36:10 compute-1 sudo[32660]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:11 compute-1 sudo[32812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmdqomgvhsvdbjwnohhhrjewhhsylbhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088970.906032-640-201304920306786/AnsiballZ_stat.py'
Oct 10 09:36:11 compute-1 sudo[32812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:14 compute-1 python3.9[32814]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:36:14 compute-1 sudo[32812]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:15 compute-1 sudo[32935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvmieqpfutueiqtmgkmkgkvwwhevwhrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088970.906032-640-201304920306786/AnsiballZ_copy.py'
Oct 10 09:36:15 compute-1 sudo[32935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:15 compute-1 python3.9[32937]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760088970.906032-640-201304920306786/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=588de6fcfc4f8f2f1febb9ce163ed2886e4b0ed4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:36:15 compute-1 sudo[32935]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:18 compute-1 sudo[33087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdfsagfmudmeiodkdgikwkobmdrvdnev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088978.491065-721-22570149828883/AnsiballZ_getent.py'
Oct 10 09:36:18 compute-1 sudo[33087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:19 compute-1 python3.9[33089]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct 10 09:36:19 compute-1 sudo[33087]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:20 compute-1 sudo[33240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpczniuskmnheduijnyisokefulgqvff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088979.6040375-745-158974487081747/AnsiballZ_group.py'
Oct 10 09:36:20 compute-1 sudo[33240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:20 compute-1 python3.9[33242]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 10 09:36:20 compute-1 groupadd[33243]: group added to /etc/group: name=qemu, GID=107
Oct 10 09:36:20 compute-1 groupadd[33243]: group added to /etc/gshadow: name=qemu
Oct 10 09:36:20 compute-1 groupadd[33243]: new group: name=qemu, GID=107
Oct 10 09:36:20 compute-1 sudo[33240]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:21 compute-1 sudo[33398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajgakkraxlmyyxsmscsqbybuazwnpjuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088980.7313054-769-118766646052216/AnsiballZ_user.py'
Oct 10 09:36:21 compute-1 sudo[33398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:21 compute-1 python3.9[33400]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 10 09:36:21 compute-1 useradd[33402]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Oct 10 09:36:21 compute-1 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 09:36:21 compute-1 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 09:36:21 compute-1 sudo[33398]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:22 compute-1 sudo[33559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxtqonfuqmjzfvrxmqclylvedclkhpwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088981.794192-793-207643029839818/AnsiballZ_getent.py'
Oct 10 09:36:22 compute-1 sudo[33559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:22 compute-1 python3.9[33561]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct 10 09:36:22 compute-1 sudo[33559]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:23 compute-1 sudo[33712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbldpwkyxgoshbylotjqsigbrjcejvrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088982.82666-817-12395909696524/AnsiballZ_group.py'
Oct 10 09:36:23 compute-1 sudo[33712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:23 compute-1 python3.9[33714]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 10 09:36:23 compute-1 groupadd[33715]: group added to /etc/group: name=hugetlbfs, GID=42477
Oct 10 09:36:23 compute-1 groupadd[33715]: group added to /etc/gshadow: name=hugetlbfs
Oct 10 09:36:23 compute-1 groupadd[33715]: new group: name=hugetlbfs, GID=42477
Oct 10 09:36:23 compute-1 sudo[33712]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:24 compute-1 sudo[33870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utwnuefakfoioayryqmnzzxjkbhtnnkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088983.8352563-844-166514177263448/AnsiballZ_file.py'
Oct 10 09:36:24 compute-1 sudo[33870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:24 compute-1 python3.9[33872]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct 10 09:36:24 compute-1 sudo[33870]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:25 compute-1 sudo[34022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnimyifhqdofwieshgfnxphjmlspmnqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088984.9067652-877-130633967613756/AnsiballZ_dnf.py'
Oct 10 09:36:25 compute-1 sudo[34022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:25 compute-1 python3.9[34024]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 09:36:27 compute-1 sudo[34022]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:27 compute-1 sudo[34175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aekmzxgggzybhoquntxthvgocmpgwmyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088987.610371-901-112148158629457/AnsiballZ_file.py'
Oct 10 09:36:27 compute-1 sudo[34175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:28 compute-1 python3.9[34177]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:36:28 compute-1 sudo[34175]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:28 compute-1 sudo[34327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejbetdgbqsiynlgjereelrdomaiuckfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088988.3343694-925-119649854274539/AnsiballZ_stat.py'
Oct 10 09:36:28 compute-1 sudo[34327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:28 compute-1 python3.9[34329]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:36:28 compute-1 sudo[34327]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:29 compute-1 sudo[34450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbtiswztzlcigsiohoupjcjmtdidiykw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088988.3343694-925-119649854274539/AnsiballZ_copy.py'
Oct 10 09:36:29 compute-1 sudo[34450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:29 compute-1 python3.9[34452]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760088988.3343694-925-119649854274539/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:36:29 compute-1 sudo[34450]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:30 compute-1 sudo[34602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iatooabynelxfelpapkdqyoacnnrngsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088989.7659514-970-20194701826703/AnsiballZ_systemd.py'
Oct 10 09:36:30 compute-1 sudo[34602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:30 compute-1 python3.9[34604]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 09:36:30 compute-1 systemd[1]: Starting Load Kernel Modules...
Oct 10 09:36:30 compute-1 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 10 09:36:30 compute-1 kernel: Bridge firewalling registered
Oct 10 09:36:30 compute-1 systemd-modules-load[34608]: Inserted module 'br_netfilter'
Oct 10 09:36:30 compute-1 systemd[1]: Finished Load Kernel Modules.
Oct 10 09:36:30 compute-1 sudo[34602]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:31 compute-1 sudo[34761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gllqqnwplvyrhiamvluhavakqsypselg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088991.052329-994-192510616130717/AnsiballZ_stat.py'
Oct 10 09:36:31 compute-1 sudo[34761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:31 compute-1 python3.9[34763]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:36:31 compute-1 sudo[34761]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:32 compute-1 sudo[34884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccrbnzwihmkpgppilsgxcnoflvkkpjny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088991.052329-994-192510616130717/AnsiballZ_copy.py'
Oct 10 09:36:32 compute-1 sudo[34884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:32 compute-1 python3.9[34886]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760088991.052329-994-192510616130717/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:36:32 compute-1 sudo[34884]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:32 compute-1 sudo[35036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-staeaqoazsiwtwayasmmxzerizdnleen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760088992.6832495-1048-250693647498231/AnsiballZ_dnf.py'
Oct 10 09:36:32 compute-1 sudo[35036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:33 compute-1 python3.9[35038]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 09:36:36 compute-1 dbus-broker-launch[759]: Noticed file-system modification, trigger reload.
Oct 10 09:36:36 compute-1 dbus-broker-launch[759]: Noticed file-system modification, trigger reload.
Oct 10 09:36:36 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 10 09:36:36 compute-1 systemd[1]: Starting man-db-cache-update.service...
Oct 10 09:36:36 compute-1 systemd[1]: Reloading.
Oct 10 09:36:36 compute-1 systemd-rc-local-generator[35098]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:36:37 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 10 09:36:37 compute-1 sudo[35036]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:38 compute-1 python3.9[36363]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:36:39 compute-1 python3.9[37319]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct 10 09:36:40 compute-1 python3.9[38028]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:36:41 compute-1 sudo[38921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwrxqhioztluxnbolhwibwdzjzxtftuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089000.8953435-1165-256874468814933/AnsiballZ_command.py'
Oct 10 09:36:41 compute-1 sudo[38921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:41 compute-1 python3.9[38943]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:36:41 compute-1 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 10 09:36:41 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 10 09:36:41 compute-1 systemd[1]: Finished man-db-cache-update.service.
Oct 10 09:36:41 compute-1 systemd[1]: man-db-cache-update.service: Consumed 5.789s CPU time.
Oct 10 09:36:41 compute-1 systemd[1]: run-r51d66231484b492bba97dc35587a0e5c.service: Deactivated successfully.
Oct 10 09:36:42 compute-1 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 10 09:36:42 compute-1 sudo[38921]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:42 compute-1 sudo[39587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlfdnwwyblqpwwitfogicbqcbigqgeok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089002.5223784-1192-231271751336331/AnsiballZ_systemd.py'
Oct 10 09:36:42 compute-1 sudo[39587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:43 compute-1 python3.9[39589]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:36:43 compute-1 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct 10 09:36:43 compute-1 systemd[1]: tuned.service: Deactivated successfully.
Oct 10 09:36:43 compute-1 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct 10 09:36:43 compute-1 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 10 09:36:43 compute-1 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 10 09:36:43 compute-1 sudo[39587]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:44 compute-1 python3.9[39751]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct 10 09:36:47 compute-1 sudo[39901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbiojsggupgbzwguepdqktheiohaflnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089007.529766-1363-199847898922061/AnsiballZ_systemd.py'
Oct 10 09:36:47 compute-1 sudo[39901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:48 compute-1 python3.9[39903]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:36:48 compute-1 systemd[1]: Reloading.
Oct 10 09:36:48 compute-1 systemd-rc-local-generator[39930]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:36:48 compute-1 systemd[1]: Starting dnf makecache...
Oct 10 09:36:48 compute-1 sudo[39901]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:48 compute-1 dnf[39942]: Failed determining last makecache time.
Oct 10 09:36:48 compute-1 dnf[39942]: delorean-openstack-barbican-42b4c41831408a8e323 107 kB/s | 3.0 kB     00:00
Oct 10 09:36:48 compute-1 dnf[39942]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 161 kB/s | 3.0 kB     00:00
Oct 10 09:36:48 compute-1 dnf[39942]: delorean-openstack-cinder-1c00d6490d88e436f26ef 161 kB/s | 3.0 kB     00:00
Oct 10 09:36:48 compute-1 dnf[39942]: delorean-python-stevedore-c4acc5639fd2329372142 179 kB/s | 3.0 kB     00:00
Oct 10 09:36:48 compute-1 dnf[39942]: delorean-python-cloudkitty-tests-tempest-3961dc 169 kB/s | 3.0 kB     00:00
Oct 10 09:36:48 compute-1 dnf[39942]: delorean-diskimage-builder-43381184423c185801b5 158 kB/s | 3.0 kB     00:00
Oct 10 09:36:48 compute-1 dnf[39942]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 177 kB/s | 3.0 kB     00:00
Oct 10 09:36:48 compute-1 dnf[39942]: delorean-python-designate-tests-tempest-347fdbc 159 kB/s | 3.0 kB     00:00
Oct 10 09:36:48 compute-1 dnf[39942]: delorean-openstack-glance-1fd12c29b339f30fe823e 159 kB/s | 3.0 kB     00:00
Oct 10 09:36:48 compute-1 dnf[39942]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 154 kB/s | 3.0 kB     00:00
Oct 10 09:36:48 compute-1 dnf[39942]: delorean-openstack-manila-3c01b7181572c95dac462 163 kB/s | 3.0 kB     00:00
Oct 10 09:36:48 compute-1 sudo[40103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cezbjlvgehxtpdhrzguithowsyjclrzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089008.6004298-1363-42737803513892/AnsiballZ_systemd.py'
Oct 10 09:36:48 compute-1 sudo[40103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:48 compute-1 dnf[39942]: delorean-python-vmware-nsxlib-458234972d1428ac9 163 kB/s | 3.0 kB     00:00
Oct 10 09:36:48 compute-1 dnf[39942]: delorean-openstack-octavia-ba397f07a7331190208c 157 kB/s | 3.0 kB     00:00
Oct 10 09:36:48 compute-1 dnf[39942]: delorean-openstack-watcher-c014f81a8647287f6dcc 142 kB/s | 3.0 kB     00:00
Oct 10 09:36:49 compute-1 dnf[39942]: delorean-edpm-image-builder-55ba53cf215b14ed95b 152 kB/s | 3.0 kB     00:00
Oct 10 09:36:49 compute-1 dnf[39942]: delorean-puppet-ceph-b0c245ccde541a63fde0564366 141 kB/s | 3.0 kB     00:00
Oct 10 09:36:49 compute-1 dnf[39942]: delorean-openstack-swift-dc98a8463506ac520c469a 140 kB/s | 3.0 kB     00:00
Oct 10 09:36:49 compute-1 dnf[39942]: delorean-python-tempestconf-8515371b7cceebd4282 142 kB/s | 3.0 kB     00:00
Oct 10 09:36:49 compute-1 dnf[39942]: delorean-openstack-heat-ui-013accbfd179753bc3f0 138 kB/s | 3.0 kB     00:00
Oct 10 09:36:49 compute-1 dnf[39942]: CentOS Stream 9 - BaseOS                         71 kB/s | 6.7 kB     00:00
Oct 10 09:36:49 compute-1 python3.9[40106]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:36:49 compute-1 systemd[1]: Reloading.
Oct 10 09:36:49 compute-1 dnf[39942]: CentOS Stream 9 - AppStream                      71 kB/s | 6.8 kB     00:00
Oct 10 09:36:49 compute-1 systemd-rc-local-generator[40141]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:36:49 compute-1 dnf[39942]: CentOS Stream 9 - CRB                            61 kB/s | 6.6 kB     00:00
Oct 10 09:36:49 compute-1 sudo[40103]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:49 compute-1 dnf[39942]: CentOS Stream 9 - Extras packages                64 kB/s | 8.0 kB     00:00
Oct 10 09:36:49 compute-1 dnf[39942]: dlrn-antelope-testing                            81 kB/s | 3.0 kB     00:00
Oct 10 09:36:49 compute-1 dnf[39942]: dlrn-antelope-build-deps                        103 kB/s | 3.0 kB     00:00
Oct 10 09:36:49 compute-1 dnf[39942]: centos9-rabbitmq                                 83 kB/s | 3.0 kB     00:00
Oct 10 09:36:49 compute-1 dnf[39942]: centos9-storage                                 114 kB/s | 3.0 kB     00:00
Oct 10 09:36:49 compute-1 dnf[39942]: centos9-opstools                                112 kB/s | 3.0 kB     00:00
Oct 10 09:36:49 compute-1 dnf[39942]: NFV SIG OpenvSwitch                             141 kB/s | 3.0 kB     00:00
Oct 10 09:36:50 compute-1 dnf[39942]: repo-setup-centos-appstream                     215 kB/s | 4.4 kB     00:00
Oct 10 09:36:50 compute-1 dnf[39942]: repo-setup-centos-baseos                        189 kB/s | 3.9 kB     00:00
Oct 10 09:36:50 compute-1 dnf[39942]: repo-setup-centos-highavailability              191 kB/s | 3.9 kB     00:00
Oct 10 09:36:50 compute-1 dnf[39942]: repo-setup-centos-powertools                    208 kB/s | 4.3 kB     00:00
Oct 10 09:36:50 compute-1 sudo[40323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axqzllusmhbsahfskfzetkiegorywdzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089009.9929368-1411-150664085048714/AnsiballZ_command.py'
Oct 10 09:36:50 compute-1 sudo[40323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:50 compute-1 dnf[39942]: Extra Packages for Enterprise Linux 9 - x86_64  200 kB/s |  25 kB     00:00
Oct 10 09:36:50 compute-1 python3.9[40325]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:36:50 compute-1 sudo[40323]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:50 compute-1 dnf[39942]: Metadata cache created.
Oct 10 09:36:50 compute-1 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct 10 09:36:50 compute-1 systemd[1]: Finished dnf makecache.
Oct 10 09:36:50 compute-1 systemd[1]: dnf-makecache.service: Consumed 1.775s CPU time.
Oct 10 09:36:51 compute-1 sudo[40476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spemxiqrmchtxfgjtbsnkrvxvrzwhnnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089010.9256322-1435-121074920227039/AnsiballZ_command.py'
Oct 10 09:36:51 compute-1 sudo[40476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:51 compute-1 python3.9[40478]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:36:51 compute-1 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Oct 10 09:36:51 compute-1 sudo[40476]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:52 compute-1 sudo[40629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zksdkxzrhvnclvjrblaxchytuzubkxou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089011.9654014-1459-6663502695664/AnsiballZ_command.py'
Oct 10 09:36:52 compute-1 sudo[40629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:52 compute-1 python3.9[40631]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:36:54 compute-1 sudo[40629]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:54 compute-1 sudo[40791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xaizwxfzkvzzmednxwxuwrrzrtsuseol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089014.2985284-1483-22153409859312/AnsiballZ_command.py'
Oct 10 09:36:54 compute-1 sudo[40791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:54 compute-1 python3.9[40793]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:36:54 compute-1 sudo[40791]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:55 compute-1 sudo[40944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvqbmnadtngrecuosprbhdgjsjgzjhoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089015.1186666-1507-270428423000642/AnsiballZ_systemd.py'
Oct 10 09:36:55 compute-1 sudo[40944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:36:55 compute-1 python3.9[40946]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 09:36:56 compute-1 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 10 09:36:56 compute-1 systemd[1]: Stopped Apply Kernel Variables.
Oct 10 09:36:56 compute-1 systemd[1]: Stopping Apply Kernel Variables...
Oct 10 09:36:56 compute-1 systemd[1]: Starting Apply Kernel Variables...
Oct 10 09:36:56 compute-1 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 10 09:36:56 compute-1 systemd[1]: Finished Apply Kernel Variables.
Oct 10 09:36:56 compute-1 sudo[40944]: pam_unix(sudo:session): session closed for user root
Oct 10 09:36:57 compute-1 sshd-session[27949]: Connection closed by 192.168.122.30 port 36050
Oct 10 09:36:57 compute-1 sshd-session[27946]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:36:57 compute-1 systemd[1]: session-10.scope: Deactivated successfully.
Oct 10 09:36:57 compute-1 systemd[1]: session-10.scope: Consumed 2min 18.572s CPU time.
Oct 10 09:36:57 compute-1 systemd-logind[789]: Session 10 logged out. Waiting for processes to exit.
Oct 10 09:36:57 compute-1 systemd-logind[789]: Removed session 10.
Oct 10 09:37:02 compute-1 sshd-session[40976]: Accepted publickey for zuul from 192.168.122.30 port 40162 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:37:02 compute-1 systemd-logind[789]: New session 11 of user zuul.
Oct 10 09:37:02 compute-1 systemd[1]: Started Session 11 of User zuul.
Oct 10 09:37:02 compute-1 sshd-session[40976]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:37:03 compute-1 python3.9[41129]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:37:05 compute-1 sudo[41283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hknsdbtatllitzyywljiyqtxrmqvttjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089024.567339-69-34798089742927/AnsiballZ_getent.py'
Oct 10 09:37:05 compute-1 sudo[41283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:37:05 compute-1 python3.9[41285]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct 10 09:37:05 compute-1 sudo[41283]: pam_unix(sudo:session): session closed for user root
Oct 10 09:37:05 compute-1 sudo[41436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwmketwosgedvxdkqcecbokqfevfgcmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089025.5013158-93-134541191239998/AnsiballZ_group.py'
Oct 10 09:37:05 compute-1 sudo[41436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:37:06 compute-1 python3.9[41438]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 10 09:37:06 compute-1 groupadd[41439]: group added to /etc/group: name=openvswitch, GID=42476
Oct 10 09:37:06 compute-1 groupadd[41439]: group added to /etc/gshadow: name=openvswitch
Oct 10 09:37:06 compute-1 groupadd[41439]: new group: name=openvswitch, GID=42476
Oct 10 09:37:06 compute-1 sudo[41436]: pam_unix(sudo:session): session closed for user root
Oct 10 09:37:07 compute-1 sudo[41594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjbtehkrrqxylxiwtxiqyklxfdlpbtho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089026.5364408-117-126748194671119/AnsiballZ_user.py'
Oct 10 09:37:07 compute-1 sudo[41594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:37:07 compute-1 python3.9[41596]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 10 09:37:07 compute-1 useradd[41598]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Oct 10 09:37:07 compute-1 useradd[41598]: add 'openvswitch' to group 'hugetlbfs'
Oct 10 09:37:07 compute-1 useradd[41598]: add 'openvswitch' to shadow group 'hugetlbfs'
Oct 10 09:37:07 compute-1 sudo[41594]: pam_unix(sudo:session): session closed for user root
Oct 10 09:37:08 compute-1 sudo[41754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnrgjgbyjqnvoyqrthwzkpslwjpnsvkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089027.7028797-147-252989666454924/AnsiballZ_setup.py'
Oct 10 09:37:08 compute-1 sudo[41754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:37:08 compute-1 python3.9[41756]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 09:37:08 compute-1 sudo[41754]: pam_unix(sudo:session): session closed for user root
Oct 10 09:37:08 compute-1 sudo[41838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvsfzqkogrvlqwrinlzacgezpcwgcdwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089027.7028797-147-252989666454924/AnsiballZ_dnf.py'
Oct 10 09:37:08 compute-1 sudo[41838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:37:09 compute-1 python3.9[41840]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 10 09:37:11 compute-1 sudo[41838]: pam_unix(sudo:session): session closed for user root
Oct 10 09:37:12 compute-1 sudo[42002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iomtppofmaqawgoizsvvkcdfinfkltey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089031.7230313-189-177818258537934/AnsiballZ_dnf.py'
Oct 10 09:37:12 compute-1 sudo[42002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:37:12 compute-1 python3.9[42004]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 09:37:23 compute-1 kernel: SELinux:  Converting 2724 SID table entries...
Oct 10 09:37:23 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Oct 10 09:37:23 compute-1 kernel: SELinux:  policy capability open_perms=1
Oct 10 09:37:23 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Oct 10 09:37:23 compute-1 kernel: SELinux:  policy capability always_check_network=0
Oct 10 09:37:23 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 10 09:37:23 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 10 09:37:23 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 10 09:37:23 compute-1 groupadd[42027]: group added to /etc/group: name=unbound, GID=993
Oct 10 09:37:23 compute-1 groupadd[42027]: group added to /etc/gshadow: name=unbound
Oct 10 09:37:23 compute-1 groupadd[42027]: new group: name=unbound, GID=993
Oct 10 09:37:23 compute-1 useradd[42034]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Oct 10 09:37:23 compute-1 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Oct 10 09:37:23 compute-1 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Oct 10 09:37:24 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 10 09:37:25 compute-1 systemd[1]: Starting man-db-cache-update.service...
Oct 10 09:37:25 compute-1 systemd[1]: Reloading.
Oct 10 09:37:25 compute-1 systemd-rc-local-generator[42532]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:37:25 compute-1 systemd-sysv-generator[42536]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:37:25 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 10 09:37:25 compute-1 sudo[42002]: pam_unix(sudo:session): session closed for user root
Oct 10 09:37:26 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 10 09:37:26 compute-1 systemd[1]: Finished man-db-cache-update.service.
Oct 10 09:37:26 compute-1 systemd[1]: run-r55563bbd2164407681bba82a98a79d56.service: Deactivated successfully.
Oct 10 09:37:28 compute-1 sudo[43103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqvtnfakrctckjlbgcbgvzgnhecncdyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089048.1339118-213-179823034727661/AnsiballZ_systemd.py'
Oct 10 09:37:28 compute-1 sudo[43103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:37:29 compute-1 python3.9[43105]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 10 09:37:29 compute-1 systemd[1]: Reloading.
Oct 10 09:37:29 compute-1 systemd-rc-local-generator[43136]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:37:29 compute-1 systemd-sysv-generator[43140]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:37:29 compute-1 systemd[1]: Starting Open vSwitch Database Unit...
Oct 10 09:37:29 compute-1 chown[43148]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Oct 10 09:37:29 compute-1 ovs-ctl[43153]: /etc/openvswitch/conf.db does not exist ... (warning).
Oct 10 09:37:29 compute-1 ovs-ctl[43153]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Oct 10 09:37:29 compute-1 ovs-ctl[43153]: Starting ovsdb-server [  OK  ]
Oct 10 09:37:29 compute-1 ovs-vsctl[43202]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Oct 10 09:37:29 compute-1 ovs-vsctl[43222]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"ee0899c1-415d-4aa8-abe8-1240b4e8bf2c\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Oct 10 09:37:29 compute-1 ovs-ctl[43153]: Configuring Open vSwitch system IDs [  OK  ]
Oct 10 09:37:29 compute-1 ovs-vsctl[43227]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Oct 10 09:37:29 compute-1 ovs-ctl[43153]: Enabling remote OVSDB managers [  OK  ]
Oct 10 09:37:29 compute-1 systemd[1]: Started Open vSwitch Database Unit.
Oct 10 09:37:29 compute-1 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Oct 10 09:37:29 compute-1 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Oct 10 09:37:29 compute-1 systemd[1]: Starting Open vSwitch Forwarding Unit...
Oct 10 09:37:30 compute-1 kernel: openvswitch: Open vSwitch switching datapath
Oct 10 09:37:30 compute-1 ovs-ctl[43273]: Inserting openvswitch module [  OK  ]
Oct 10 09:37:30 compute-1 ovs-ctl[43242]: Starting ovs-vswitchd [  OK  ]
Oct 10 09:37:30 compute-1 ovs-vsctl[43290]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-1
Oct 10 09:37:30 compute-1 ovs-ctl[43242]: Enabling remote OVSDB managers [  OK  ]
Oct 10 09:37:30 compute-1 systemd[1]: Started Open vSwitch Forwarding Unit.
Oct 10 09:37:30 compute-1 systemd[1]: Starting Open vSwitch...
Oct 10 09:37:30 compute-1 systemd[1]: Finished Open vSwitch.
Oct 10 09:37:30 compute-1 sudo[43103]: pam_unix(sudo:session): session closed for user root
Oct 10 09:37:31 compute-1 python3.9[43442]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:37:32 compute-1 sudo[43592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gakjjerdvykongpgrlwpdewbahqkpydf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089051.5935726-267-264878421555911/AnsiballZ_sefcontext.py'
Oct 10 09:37:32 compute-1 sudo[43592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:37:32 compute-1 python3.9[43594]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct 10 09:37:33 compute-1 kernel: SELinux:  Converting 2738 SID table entries...
Oct 10 09:37:33 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Oct 10 09:37:33 compute-1 kernel: SELinux:  policy capability open_perms=1
Oct 10 09:37:33 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Oct 10 09:37:33 compute-1 kernel: SELinux:  policy capability always_check_network=0
Oct 10 09:37:33 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 10 09:37:33 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 10 09:37:33 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 10 09:37:33 compute-1 sudo[43592]: pam_unix(sudo:session): session closed for user root
Oct 10 09:37:34 compute-1 python3.9[43749]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:37:35 compute-1 sudo[43905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnfyxnodsmwgdigthlarmacqbgwdpbuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089055.4552038-321-39521221124280/AnsiballZ_dnf.py'
Oct 10 09:37:35 compute-1 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Oct 10 09:37:35 compute-1 sudo[43905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:37:36 compute-1 python3.9[43907]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 09:37:37 compute-1 sudo[43905]: pam_unix(sudo:session): session closed for user root
Oct 10 09:37:38 compute-1 sudo[44058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhgqysesujwlcuqmtabydrdsnkpieesd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089057.475793-345-208577309027634/AnsiballZ_command.py'
Oct 10 09:37:38 compute-1 sudo[44058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:37:38 compute-1 python3.9[44060]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:37:38 compute-1 sudo[44058]: pam_unix(sudo:session): session closed for user root
Oct 10 09:37:39 compute-1 sudo[44345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlobcrzsazuodtxvggkdwigzmgbbqfbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089059.1557598-369-241589440502185/AnsiballZ_file.py'
Oct 10 09:37:39 compute-1 sudo[44345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:37:39 compute-1 python3.9[44347]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 10 09:37:39 compute-1 sudo[44345]: pam_unix(sudo:session): session closed for user root
Oct 10 09:37:40 compute-1 python3.9[44497]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:37:41 compute-1 sudo[44649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arozpzqirmufxvoyifhxbfclkhubzklp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089061.1000228-417-208866483510771/AnsiballZ_dnf.py'
Oct 10 09:37:41 compute-1 sudo[44649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:37:41 compute-1 python3.9[44651]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 09:37:43 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 10 09:37:43 compute-1 systemd[1]: Starting man-db-cache-update.service...
Oct 10 09:37:43 compute-1 systemd[1]: Reloading.
Oct 10 09:37:43 compute-1 systemd-rc-local-generator[44690]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:37:43 compute-1 systemd-sysv-generator[44693]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:37:43 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 10 09:37:43 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 10 09:37:43 compute-1 systemd[1]: Finished man-db-cache-update.service.
Oct 10 09:37:43 compute-1 systemd[1]: run-r87a2c1727f9c41cab47cfb3e9d97f3de.service: Deactivated successfully.
Oct 10 09:37:43 compute-1 sudo[44649]: pam_unix(sudo:session): session closed for user root
Oct 10 09:37:44 compute-1 sudo[44966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jodubdhnveaqexievxqkgjlgwzzzafzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089064.2082047-441-170010952743815/AnsiballZ_systemd.py'
Oct 10 09:37:44 compute-1 sudo[44966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:37:44 compute-1 python3.9[44968]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 09:37:44 compute-1 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct 10 09:37:44 compute-1 systemd[1]: Stopped Network Manager Wait Online.
Oct 10 09:37:44 compute-1 systemd[1]: Stopping Network Manager Wait Online...
Oct 10 09:37:44 compute-1 systemd[1]: Stopping Network Manager...
Oct 10 09:37:44 compute-1 NetworkManager[3951]: <info>  [1760089064.9235] caught SIGTERM, shutting down normally.
Oct 10 09:37:44 compute-1 NetworkManager[3951]: <info>  [1760089064.9248] dhcp4 (eth0): canceled DHCP transaction
Oct 10 09:37:44 compute-1 NetworkManager[3951]: <info>  [1760089064.9248] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 10 09:37:44 compute-1 NetworkManager[3951]: <info>  [1760089064.9248] dhcp4 (eth0): state changed no lease
Oct 10 09:37:44 compute-1 NetworkManager[3951]: <info>  [1760089064.9251] manager: NetworkManager state is now CONNECTED_SITE
Oct 10 09:37:44 compute-1 NetworkManager[3951]: <info>  [1760089064.9297] exiting (success)
Oct 10 09:37:44 compute-1 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 10 09:37:44 compute-1 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 10 09:37:44 compute-1 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct 10 09:37:44 compute-1 systemd[1]: Stopped Network Manager.
Oct 10 09:37:44 compute-1 systemd[1]: NetworkManager.service: Consumed 10.557s CPU time, 4.1M memory peak, read 0B from disk, written 21.0K to disk.
Oct 10 09:37:44 compute-1 systemd[1]: Starting Network Manager...
Oct 10 09:37:44 compute-1 NetworkManager[44982]: <info>  [1760089064.9976] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:fb56a8ec-12f4-4a91-b74d-e8ffc8e6ce0c)
Oct 10 09:37:44 compute-1 NetworkManager[44982]: <info>  [1760089064.9976] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.0032] manager[0x5562e6775090]: monitoring kernel firmware directory '/lib/firmware'.
Oct 10 09:37:45 compute-1 systemd[1]: Starting Hostname Service...
Oct 10 09:37:45 compute-1 systemd[1]: Started Hostname Service.
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.0803] hostname: hostname: using hostnamed
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.0803] hostname: static hostname changed from (none) to "compute-1"
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.0808] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.0813] manager[0x5562e6775090]: rfkill: Wi-Fi hardware radio set enabled
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.0813] manager[0x5562e6775090]: rfkill: WWAN hardware radio set enabled
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.0831] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.0839] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.0839] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.0840] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.0840] manager: Networking is enabled by state file
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.0841] settings: Loaded settings plugin: keyfile (internal)
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.0844] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.0865] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.0872] dhcp: init: Using DHCP client 'internal'
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.0874] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.0878] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.0881] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.0886] device (lo): Activation: starting connection 'lo' (da285bad-fb13-45e9-93ce-582789837c7a)
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.0891] device (eth0): carrier: link connected
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.0894] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.0897] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.0898] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.0903] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.0907] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.0911] device (eth1): carrier: link connected
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.0914] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.0917] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (8d1fd0d1-71da-5534-9141-6178f63cc684) (indicated)
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.0917] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.0921] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.0925] device (eth1): Activation: starting connection 'ci-private-network' (8d1fd0d1-71da-5534-9141-6178f63cc684)
Oct 10 09:37:45 compute-1 systemd[1]: Started Network Manager.
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.0934] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.0946] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.0950] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.0953] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.0957] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.0962] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.0967] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.0969] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.0973] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.0978] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.0980] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.0987] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.1000] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.1008] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.1011] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.1017] device (lo): Activation: successful, device activated.
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.1044] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.1045] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.1047] manager: NetworkManager state is now CONNECTED_LOCAL
Oct 10 09:37:45 compute-1 NetworkManager[44982]: <info>  [1760089065.1050] device (eth1): Activation: successful, device activated.
Oct 10 09:37:45 compute-1 systemd[1]: Starting Network Manager Wait Online...
Oct 10 09:37:45 compute-1 sudo[44966]: pam_unix(sudo:session): session closed for user root
Oct 10 09:37:45 compute-1 sudo[45173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvltakngsvrpjrovlkrnastjmpmfyrfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089065.3251894-465-221121030332920/AnsiballZ_dnf.py'
Oct 10 09:37:45 compute-1 sudo[45173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:37:45 compute-1 python3.9[45175]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 09:37:46 compute-1 NetworkManager[44982]: <info>  [1760089066.6189] dhcp4 (eth0): state changed new lease, address=38.102.83.20
Oct 10 09:37:46 compute-1 NetworkManager[44982]: <info>  [1760089066.6200] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 10 09:37:46 compute-1 NetworkManager[44982]: <info>  [1760089066.6716] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct 10 09:37:46 compute-1 NetworkManager[44982]: <info>  [1760089066.6747] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct 10 09:37:46 compute-1 NetworkManager[44982]: <info>  [1760089066.6748] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct 10 09:37:46 compute-1 NetworkManager[44982]: <info>  [1760089066.6752] manager: NetworkManager state is now CONNECTED_SITE
Oct 10 09:37:46 compute-1 NetworkManager[44982]: <info>  [1760089066.6759] device (eth0): Activation: successful, device activated.
Oct 10 09:37:46 compute-1 NetworkManager[44982]: <info>  [1760089066.6768] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 10 09:37:46 compute-1 NetworkManager[44982]: <info>  [1760089066.6772] manager: startup complete
Oct 10 09:37:46 compute-1 systemd[1]: Finished Network Manager Wait Online.
Oct 10 09:37:52 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 10 09:37:52 compute-1 systemd[1]: Starting man-db-cache-update.service...
Oct 10 09:37:52 compute-1 systemd[1]: Reloading.
Oct 10 09:37:53 compute-1 systemd-sysv-generator[45249]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:37:53 compute-1 systemd-rc-local-generator[45245]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:37:53 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 10 09:37:53 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 10 09:37:53 compute-1 systemd[1]: Finished man-db-cache-update.service.
Oct 10 09:37:54 compute-1 systemd[1]: run-r53b25801855644fda0006ea6e3872cd7.service: Deactivated successfully.
Oct 10 09:37:54 compute-1 sudo[45173]: pam_unix(sudo:session): session closed for user root
Oct 10 09:37:55 compute-1 sudo[45654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sajsrjaxnbojyigmiftcxowpfrfhwzjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089074.716878-501-81281464738141/AnsiballZ_stat.py'
Oct 10 09:37:55 compute-1 sudo[45654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:37:55 compute-1 python3.9[45656]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:37:55 compute-1 sudo[45654]: pam_unix(sudo:session): session closed for user root
Oct 10 09:37:56 compute-1 sudo[45806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdvngsfrwbotmenizxyoegbipfxgyaim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089075.5123665-528-94609271195517/AnsiballZ_ini_file.py'
Oct 10 09:37:56 compute-1 sudo[45806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:37:56 compute-1 python3.9[45808]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:37:56 compute-1 sudo[45806]: pam_unix(sudo:session): session closed for user root
Oct 10 09:37:56 compute-1 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 10 09:37:57 compute-1 sudo[45960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psxenuvnvpbbtvvguvyvyruurhulfbzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089076.753248-558-4241483560031/AnsiballZ_ini_file.py'
Oct 10 09:37:57 compute-1 sudo[45960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:37:57 compute-1 python3.9[45962]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:37:57 compute-1 sudo[45960]: pam_unix(sudo:session): session closed for user root
Oct 10 09:37:57 compute-1 sudo[46112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-napnaxwmszntnyqnubpgfshcekpeenhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089077.488037-558-244982963169948/AnsiballZ_ini_file.py'
Oct 10 09:37:57 compute-1 sudo[46112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:37:58 compute-1 python3.9[46114]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:37:58 compute-1 sudo[46112]: pam_unix(sudo:session): session closed for user root
Oct 10 09:37:58 compute-1 sudo[46264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtvpyuzjrxhcogmoqppkiuxeyhmkcony ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089078.2993772-603-239071907269577/AnsiballZ_ini_file.py'
Oct 10 09:37:58 compute-1 sudo[46264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:37:58 compute-1 python3.9[46266]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:37:58 compute-1 sudo[46264]: pam_unix(sudo:session): session closed for user root
Oct 10 09:37:59 compute-1 sudo[46416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywnilcbkbirslszlwlklvmopvjunfqmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089078.9958944-603-226426582943391/AnsiballZ_ini_file.py'
Oct 10 09:37:59 compute-1 sudo[46416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:37:59 compute-1 python3.9[46418]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:37:59 compute-1 sudo[46416]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:00 compute-1 sudo[46568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztztfrzjbwcpyelrlcwyyzchoezxssqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089079.7816956-648-175202567655004/AnsiballZ_stat.py'
Oct 10 09:38:00 compute-1 sudo[46568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:00 compute-1 python3.9[46570]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:38:00 compute-1 sudo[46568]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:01 compute-1 sudo[46691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbuncjnmvzgkkgeqcqllnylsautsrzex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089079.7816956-648-175202567655004/AnsiballZ_copy.py'
Oct 10 09:38:01 compute-1 sudo[46691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:01 compute-1 python3.9[46693]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1760089079.7816956-648-175202567655004/.source _original_basename=.k1y5k47r follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:38:01 compute-1 sudo[46691]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:01 compute-1 sudo[46843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrddjlbkgsgpmeqykrcxanfupasqbcnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089081.478745-693-131318345287234/AnsiballZ_file.py'
Oct 10 09:38:01 compute-1 sudo[46843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:02 compute-1 python3.9[46845]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:38:02 compute-1 sudo[46843]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:02 compute-1 sudo[46995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnhgnkgvcmdaefwpzlvigvligatqnhsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089082.2441027-717-114928935233159/AnsiballZ_edpm_os_net_config_mappings.py'
Oct 10 09:38:02 compute-1 sudo[46995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:02 compute-1 python3.9[46997]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Oct 10 09:38:02 compute-1 sudo[46995]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:03 compute-1 sudo[47147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eatklzadeledyrybovgumjuforymrsqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089083.2343264-744-246055422135495/AnsiballZ_file.py'
Oct 10 09:38:03 compute-1 sudo[47147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:03 compute-1 python3.9[47149]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:38:03 compute-1 sudo[47147]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:04 compute-1 sudo[47299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnsfwspkrjwabklvylfitwpvggwmrinc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089084.1631534-774-36999702189707/AnsiballZ_stat.py'
Oct 10 09:38:04 compute-1 sudo[47299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:04 compute-1 sudo[47299]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:05 compute-1 sudo[47422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcjfhdifgacjzekmtehzzhossfjgkxgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089084.1631534-774-36999702189707/AnsiballZ_copy.py'
Oct 10 09:38:05 compute-1 sudo[47422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:05 compute-1 sudo[47422]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:06 compute-1 sudo[47574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phhayfnoxhhujjaqbnfxchpnzuevvpfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089085.66701-819-164784524067319/AnsiballZ_slurp.py'
Oct 10 09:38:06 compute-1 sudo[47574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:06 compute-1 python3.9[47576]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Oct 10 09:38:06 compute-1 sudo[47574]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:07 compute-1 sudo[47749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbxszuxfeylkdpihouqeclrosbogevjz ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089086.6225617-846-10490564304026/async_wrapper.py j666903082859 300 /home/zuul/.ansible/tmp/ansible-tmp-1760089086.6225617-846-10490564304026/AnsiballZ_edpm_os_net_config.py _'
Oct 10 09:38:07 compute-1 sudo[47749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:07 compute-1 ansible-async_wrapper.py[47751]: Invoked with j666903082859 300 /home/zuul/.ansible/tmp/ansible-tmp-1760089086.6225617-846-10490564304026/AnsiballZ_edpm_os_net_config.py _
Oct 10 09:38:07 compute-1 ansible-async_wrapper.py[47754]: Starting module and watcher
Oct 10 09:38:07 compute-1 ansible-async_wrapper.py[47754]: Start watching 47755 (300)
Oct 10 09:38:07 compute-1 ansible-async_wrapper.py[47755]: Start module (47755)
Oct 10 09:38:07 compute-1 ansible-async_wrapper.py[47751]: Return async_wrapper task started.
Oct 10 09:38:07 compute-1 sudo[47749]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:07 compute-1 python3.9[47756]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Oct 10 09:38:08 compute-1 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Oct 10 09:38:08 compute-1 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Oct 10 09:38:08 compute-1 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Oct 10 09:38:08 compute-1 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Oct 10 09:38:08 compute-1 kernel: cfg80211: failed to load regulatory.db
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.6013] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47757 uid=0 result="success"
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.6034] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47757 uid=0 result="success"
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.6632] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.6635] audit: op="connection-add" uuid="7b38e87c-8a1a-4b20-a2bb-a211382e99d6" name="br-ex-br" pid=47757 uid=0 result="success"
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.6662] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.6665] audit: op="connection-add" uuid="147e12d3-cc78-483f-8eaf-60bc5efbc6e3" name="br-ex-port" pid=47757 uid=0 result="success"
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.6691] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.6695] audit: op="connection-add" uuid="9cd64f5e-42d5-478b-bba7-3758c836d419" name="eth1-port" pid=47757 uid=0 result="success"
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.6719] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.6722] audit: op="connection-add" uuid="f3c257ab-34f8-4934-9854-2b5be39213a2" name="vlan20-port" pid=47757 uid=0 result="success"
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.6746] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.6749] audit: op="connection-add" uuid="57782c16-7171-4876-91db-7bdd0f3697df" name="vlan21-port" pid=47757 uid=0 result="success"
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.6773] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.6776] audit: op="connection-add" uuid="8975fb2c-26ed-4bb6-a672-2c9d601fef11" name="vlan22-port" pid=47757 uid=0 result="success"
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.6802] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.6806] audit: op="connection-add" uuid="b0b0efba-77b0-45c1-8154-12560bce810e" name="vlan23-port" pid=47757 uid=0 result="success"
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.6850] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.autoconnect-priority,connection.timestamp,802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode,ipv6.dhcp-timeout" pid=47757 uid=0 result="success"
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.6880] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.6884] audit: op="connection-add" uuid="dcc008f2-7af8-482f-b581-a13cb582659a" name="br-ex-if" pid=47757 uid=0 result="success"
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.6929] audit: op="connection-update" uuid="8d1fd0d1-71da-5534-9141-6178f63cc684" name="ci-private-network" args="connection.port-type,connection.controller,connection.slave-type,connection.master,connection.timestamp,ovs-external-ids.data,ovs-interface.type,ipv4.addresses,ipv4.method,ipv4.routes,ipv4.never-default,ipv4.routing-rules,ipv4.dns,ipv6.addresses,ipv6.method,ipv6.routes,ipv6.addr-gen-mode,ipv6.routing-rules,ipv6.dns" pid=47757 uid=0 result="success"
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.6951] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.6953] audit: op="connection-add" uuid="0f629f31-53ad-4e29-a9f5-d581129f9c7a" name="vlan20-if" pid=47757 uid=0 result="success"
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.6975] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.6977] audit: op="connection-add" uuid="c574cdc0-53ad-4ec4-aec5-b19e5340147e" name="vlan21-if" pid=47757 uid=0 result="success"
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7001] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7003] audit: op="connection-add" uuid="e89dbb83-6269-4a8f-a897-bfe290191540" name="vlan22-if" pid=47757 uid=0 result="success"
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7024] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7026] audit: op="connection-add" uuid="080dfc91-6fae-4350-84d2-2d738b5ea6de" name="vlan23-if" pid=47757 uid=0 result="success"
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7040] audit: op="connection-delete" uuid="098c32ca-35a5-3746-add5-29d4391ea12b" name="Wired connection 1" pid=47757 uid=0 result="success"
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7054] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7067] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7071] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (7b38e87c-8a1a-4b20-a2bb-a211382e99d6)
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7072] audit: op="connection-activate" uuid="7b38e87c-8a1a-4b20-a2bb-a211382e99d6" name="br-ex-br" pid=47757 uid=0 result="success"
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7075] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7082] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7087] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (147e12d3-cc78-483f-8eaf-60bc5efbc6e3)
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7089] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7095] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7100] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (9cd64f5e-42d5-478b-bba7-3758c836d419)
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7102] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7110] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7116] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (f3c257ab-34f8-4934-9854-2b5be39213a2)
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7118] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7127] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7131] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (57782c16-7171-4876-91db-7bdd0f3697df)
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7134] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7141] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7147] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (8975fb2c-26ed-4bb6-a672-2c9d601fef11)
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7149] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7157] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7162] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (b0b0efba-77b0-45c1-8154-12560bce810e)
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7163] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7167] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7169] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7177] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7182] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7188] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (dcc008f2-7af8-482f-b581-a13cb582659a)
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7189] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7193] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7195] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7197] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7198] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7211] device (eth1): disconnecting for new activation request.
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7212] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7216] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7218] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7220] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7224] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7229] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7234] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (0f629f31-53ad-4e29-a9f5-d581129f9c7a)
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7235] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7239] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7241] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7244] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7247] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7252] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7257] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (c574cdc0-53ad-4ec4-aec5-b19e5340147e)
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7258] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7261] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7264] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7265] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7269] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7275] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7280] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (e89dbb83-6269-4a8f-a897-bfe290191540)
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7281] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7286] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7288] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7289] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7293] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7298] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7304] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (080dfc91-6fae-4350-84d2-2d738b5ea6de)
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7305] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7308] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7310] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7312] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7314] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7330] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode" pid=47757 uid=0 result="success"
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7332] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7337] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7340] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7349] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7353] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7357] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7360] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7362] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7367] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7372] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7375] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7377] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7383] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7388] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7392] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7394] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7398] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7403] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7406] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7408] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7413] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7417] dhcp4 (eth0): canceled DHCP transaction
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7417] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7418] dhcp4 (eth0): state changed no lease
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7419] dhcp4 (eth0): activation: beginning transaction (no timeout)
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7445] audit: op="device-reapply" interface="eth1" ifindex=3 pid=47757 uid=0 result="fail" reason="Device is not activated"
Oct 10 09:38:09 compute-1 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7478] dhcp4 (eth0): state changed new lease, address=38.102.83.20
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7732] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 10 09:38:09 compute-1 kernel: ovs-system: entered promiscuous mode
Oct 10 09:38:09 compute-1 kernel: Timeout policy base is empty
Oct 10 09:38:09 compute-1 systemd-udevd[47763]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7837] device (eth1): Activation: starting connection 'ci-private-network' (8d1fd0d1-71da-5534-9141-6178f63cc684)
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7842] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7847] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7856] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7862] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7865] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7872] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7878] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7883] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7884] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7885] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7887] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7889] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7890] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7893] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7900] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7904] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7908] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7913] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7916] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7920] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7923] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7928] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7931] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7935] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7938] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7942] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7945] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7950] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7954] device (eth1): state change: ip-config -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7956] device (eth1)[Open vSwitch Port]: detaching ovs interface eth1
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7956] device (eth1): released from controller device eth1
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7962] device (eth1): disconnecting for new activation request.
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7963] audit: op="connection-activate" uuid="8d1fd0d1-71da-5534-9141-6178f63cc684" name="ci-private-network" pid=47757 uid=0 result="success"
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.7967] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.8031] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47757 uid=0 result="success"
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.8032] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.8042] device (eth1): Activation: starting connection 'ci-private-network' (8d1fd0d1-71da-5534-9141-6178f63cc684)
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.8051] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.8055] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.8060] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.8085] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.8088] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.8104] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.8107] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.8116] device (eth1): Activation: successful, device activated.
Oct 10 09:38:09 compute-1 kernel: br-ex: entered promiscuous mode
Oct 10 09:38:09 compute-1 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Oct 10 09:38:09 compute-1 kernel: vlan22: entered promiscuous mode
Oct 10 09:38:09 compute-1 systemd-udevd[47762]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.8260] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.8271] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 kernel: vlan23: entered promiscuous mode
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.8317] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.8322] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.8330] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 10 09:38:09 compute-1 systemd-udevd[47761]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 09:38:09 compute-1 kernel: vlan20: entered promiscuous mode
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.8379] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Oct 10 09:38:09 compute-1 kernel: vlan21: entered promiscuous mode
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.8407] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.8427] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.8429] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.8437] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.8451] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.8459] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.8477] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.8494] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.8499] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.8518] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.8520] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.8530] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.8538] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.8544] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.8551] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.8564] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.8598] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.8599] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 10 09:38:09 compute-1 NetworkManager[44982]: <info>  [1760089089.8604] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 10 09:38:10 compute-1 NetworkManager[44982]: <info>  [1760089090.9907] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47757 uid=0 result="success"
Oct 10 09:38:11 compute-1 sudo[48118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbmeudfwuwvebzcqfkvycqbrulzzemkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089090.7022738-846-90689265135283/AnsiballZ_async_status.py'
Oct 10 09:38:11 compute-1 sudo[48118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:11 compute-1 NetworkManager[44982]: <info>  [1760089091.2087] checkpoint[0x5562e674c950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Oct 10 09:38:11 compute-1 NetworkManager[44982]: <info>  [1760089091.2093] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47757 uid=0 result="success"
Oct 10 09:38:11 compute-1 python3.9[48120]: ansible-ansible.legacy.async_status Invoked with jid=j666903082859.47751 mode=status _async_dir=/root/.ansible_async
Oct 10 09:38:11 compute-1 sudo[48118]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:11 compute-1 NetworkManager[44982]: <info>  [1760089091.5796] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47757 uid=0 result="success"
Oct 10 09:38:11 compute-1 NetworkManager[44982]: <info>  [1760089091.5805] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47757 uid=0 result="success"
Oct 10 09:38:11 compute-1 NetworkManager[44982]: <info>  [1760089091.7684] audit: op="networking-control" arg="global-dns-configuration" pid=47757 uid=0 result="success"
Oct 10 09:38:11 compute-1 NetworkManager[44982]: <info>  [1760089091.7777] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Oct 10 09:38:11 compute-1 NetworkManager[44982]: <info>  [1760089091.7967] audit: op="networking-control" arg="global-dns-configuration" pid=47757 uid=0 result="success"
Oct 10 09:38:11 compute-1 NetworkManager[44982]: <info>  [1760089091.7990] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47757 uid=0 result="success"
Oct 10 09:38:11 compute-1 NetworkManager[44982]: <info>  [1760089091.9550] checkpoint[0x5562e674ca20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Oct 10 09:38:11 compute-1 NetworkManager[44982]: <info>  [1760089091.9555] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47757 uid=0 result="success"
Oct 10 09:38:12 compute-1 ansible-async_wrapper.py[47755]: Module complete (47755)
Oct 10 09:38:12 compute-1 ansible-async_wrapper.py[47754]: Done in kid B.
Oct 10 09:38:14 compute-1 sudo[48225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-durydufcvjabjuaekcmvvojuhrtwpllp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089090.7022738-846-90689265135283/AnsiballZ_async_status.py'
Oct 10 09:38:14 compute-1 sudo[48225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:15 compute-1 python3.9[48227]: ansible-ansible.legacy.async_status Invoked with jid=j666903082859.47751 mode=status _async_dir=/root/.ansible_async
Oct 10 09:38:15 compute-1 sudo[48225]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:15 compute-1 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 10 09:38:15 compute-1 sudo[48326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zeuyzssadpltahuyewtpyteqfufybqhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089090.7022738-846-90689265135283/AnsiballZ_async_status.py'
Oct 10 09:38:15 compute-1 sudo[48326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:15 compute-1 python3.9[48328]: ansible-ansible.legacy.async_status Invoked with jid=j666903082859.47751 mode=cleanup _async_dir=/root/.ansible_async
Oct 10 09:38:15 compute-1 sudo[48326]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:16 compute-1 sudo[48478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuykrryuzkljjhomjnbardkoiauglync ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089095.8391328-927-213698857352948/AnsiballZ_stat.py'
Oct 10 09:38:16 compute-1 sudo[48478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:16 compute-1 python3.9[48480]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:38:16 compute-1 sudo[48478]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:16 compute-1 sudo[48601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsvvuknmbfxolehettgjdlvglvojlbna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089095.8391328-927-213698857352948/AnsiballZ_copy.py'
Oct 10 09:38:16 compute-1 sudo[48601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:16 compute-1 python3.9[48603]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760089095.8391328-927-213698857352948/.source.returncode _original_basename=.30gys562 follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:38:17 compute-1 sudo[48601]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:17 compute-1 sudo[48753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrlmimuuofutmxqteiekykpedgcqtvhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089097.257864-975-219272417300736/AnsiballZ_stat.py'
Oct 10 09:38:17 compute-1 sudo[48753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:17 compute-1 python3.9[48755]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:38:17 compute-1 sudo[48753]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:18 compute-1 sudo[48877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdgpqezclyounfdpwtrmehdfvajzqnmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089097.257864-975-219272417300736/AnsiballZ_copy.py'
Oct 10 09:38:18 compute-1 sudo[48877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:18 compute-1 python3.9[48879]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760089097.257864-975-219272417300736/.source.cfg _original_basename=.h6t_at2r follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:38:18 compute-1 sudo[48877]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:19 compute-1 sudo[49029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjgmcylsrnmabxkjvkjbnlsrokfcawtt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089098.8235676-1020-76409450072045/AnsiballZ_systemd.py'
Oct 10 09:38:19 compute-1 sudo[49029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:19 compute-1 python3.9[49031]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 09:38:19 compute-1 systemd[1]: Reloading Network Manager...
Oct 10 09:38:19 compute-1 NetworkManager[44982]: <info>  [1760089099.5880] audit: op="reload" arg="0" pid=49035 uid=0 result="success"
Oct 10 09:38:19 compute-1 NetworkManager[44982]: <info>  [1760089099.5886] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Oct 10 09:38:19 compute-1 systemd[1]: Reloaded Network Manager.
Oct 10 09:38:19 compute-1 sudo[49029]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:19 compute-1 sshd-session[40979]: Connection closed by 192.168.122.30 port 40162
Oct 10 09:38:20 compute-1 sshd-session[40976]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:38:20 compute-1 systemd[1]: session-11.scope: Deactivated successfully.
Oct 10 09:38:20 compute-1 systemd[1]: session-11.scope: Consumed 52.968s CPU time.
Oct 10 09:38:20 compute-1 systemd-logind[789]: Session 11 logged out. Waiting for processes to exit.
Oct 10 09:38:20 compute-1 systemd-logind[789]: Removed session 11.
Oct 10 09:38:25 compute-1 sshd-session[49066]: Accepted publickey for zuul from 192.168.122.30 port 37440 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:38:25 compute-1 systemd-logind[789]: New session 12 of user zuul.
Oct 10 09:38:25 compute-1 systemd[1]: Started Session 12 of User zuul.
Oct 10 09:38:25 compute-1 sshd-session[49066]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:38:26 compute-1 python3.9[49219]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:38:27 compute-1 python3.9[49373]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 09:38:29 compute-1 python3.9[49567]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:38:29 compute-1 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 10 09:38:29 compute-1 sshd-session[49069]: Connection closed by 192.168.122.30 port 37440
Oct 10 09:38:29 compute-1 sshd-session[49066]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:38:29 compute-1 systemd[1]: session-12.scope: Deactivated successfully.
Oct 10 09:38:29 compute-1 systemd[1]: session-12.scope: Consumed 2.711s CPU time.
Oct 10 09:38:29 compute-1 systemd-logind[789]: Session 12 logged out. Waiting for processes to exit.
Oct 10 09:38:29 compute-1 systemd-logind[789]: Removed session 12.
Oct 10 09:38:35 compute-1 sshd-session[49597]: Accepted publickey for zuul from 192.168.122.30 port 49038 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:38:35 compute-1 systemd-logind[789]: New session 13 of user zuul.
Oct 10 09:38:35 compute-1 systemd[1]: Started Session 13 of User zuul.
Oct 10 09:38:35 compute-1 sshd-session[49597]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:38:36 compute-1 python3.9[49750]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:38:37 compute-1 python3.9[49904]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:38:38 compute-1 sudo[50059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktlfifotgcdpahijhnbjfsuarqbtzfgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089117.8042717-81-120870111338726/AnsiballZ_setup.py'
Oct 10 09:38:38 compute-1 sudo[50059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:38 compute-1 python3.9[50061]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 09:38:38 compute-1 sudo[50059]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:39 compute-1 sudo[50143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnpwrgoiixtvovyffjwuvvvfqielorpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089117.8042717-81-120870111338726/AnsiballZ_dnf.py'
Oct 10 09:38:39 compute-1 sudo[50143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:39 compute-1 python3.9[50145]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 09:38:40 compute-1 sudo[50143]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:41 compute-1 sudo[50297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eankcgtyeiwkdmwxfymqgbxdihxjsujt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089120.7568378-117-199179358692393/AnsiballZ_setup.py'
Oct 10 09:38:41 compute-1 sudo[50297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:41 compute-1 python3.9[50299]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 09:38:41 compute-1 sudo[50297]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:42 compute-1 sudo[50492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcbrngcipjkxbvfutcodlhtjrotfrcdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089122.249769-150-106519504041762/AnsiballZ_file.py'
Oct 10 09:38:42 compute-1 sudo[50492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:42 compute-1 python3.9[50494]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:38:42 compute-1 sudo[50492]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:43 compute-1 sudo[50644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejbroeehnxukbvmrzxylztaqxnkqacsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089123.200779-174-230143112170172/AnsiballZ_command.py'
Oct 10 09:38:43 compute-1 sudo[50644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:43 compute-1 python3.9[50646]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:38:44 compute-1 systemd[1]: var-lib-containers-storage-overlay-compat2852767381-merged.mount: Deactivated successfully.
Oct 10 09:38:44 compute-1 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck2170325221-merged.mount: Deactivated successfully.
Oct 10 09:38:44 compute-1 podman[50647]: 2025-10-10 09:38:44.049219817 +0000 UTC m=+0.076965870 system refresh
Oct 10 09:38:44 compute-1 sudo[50644]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:44 compute-1 sudo[50806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnghqtgdkeelmyxgnnkiajuevygfqmwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089124.2955315-198-187223606016097/AnsiballZ_stat.py'
Oct 10 09:38:44 compute-1 sudo[50806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:45 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 09:38:45 compute-1 python3.9[50808]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:38:45 compute-1 sudo[50806]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:45 compute-1 sudo[50929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjzeraxeuaplvuyfswttefrrndzkrcxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089124.2955315-198-187223606016097/AnsiballZ_copy.py'
Oct 10 09:38:45 compute-1 sudo[50929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:45 compute-1 python3.9[50931]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760089124.2955315-198-187223606016097/.source.json follow=False _original_basename=podman_network_config.j2 checksum=285b71da672ecff99ec1ea5d612fdfcd7171c48f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:38:45 compute-1 sudo[50929]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:46 compute-1 sudo[51081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkqlikftyctjjrlnjdaojxmqhjddzrbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089126.0165043-243-220862171723225/AnsiballZ_stat.py'
Oct 10 09:38:46 compute-1 sudo[51081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:46 compute-1 python3.9[51083]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:38:46 compute-1 sudo[51081]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:46 compute-1 sudo[51204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-folwlwqalfyrijxvtqyxjesjeegfibby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089126.0165043-243-220862171723225/AnsiballZ_copy.py'
Oct 10 09:38:46 compute-1 sudo[51204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:47 compute-1 python3.9[51206]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760089126.0165043-243-220862171723225/.source.conf follow=False _original_basename=registries.conf.j2 checksum=804a0d01b832e60d20f779a331306df708c87b02 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:38:47 compute-1 sudo[51204]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:47 compute-1 sudo[51356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjjtglzorjrlqcbtvmfckiiwsvohldfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089127.4817019-291-141716159538342/AnsiballZ_ini_file.py'
Oct 10 09:38:47 compute-1 sudo[51356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:48 compute-1 python3.9[51358]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:38:48 compute-1 sudo[51356]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:48 compute-1 sudo[51508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdxvrmfnvddpylyzfqnzttkygzlcjttm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089128.3734236-291-111593360242281/AnsiballZ_ini_file.py'
Oct 10 09:38:48 compute-1 sudo[51508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:48 compute-1 python3.9[51510]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:38:48 compute-1 sudo[51508]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:49 compute-1 sudo[51660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mntinqnqsuzjualkisoudgvlheqydtpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089129.1583111-291-129437391237321/AnsiballZ_ini_file.py'
Oct 10 09:38:49 compute-1 sudo[51660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:49 compute-1 python3.9[51662]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:38:49 compute-1 sudo[51660]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:50 compute-1 sudo[51812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuogwxvaxykjqksypzlyjaqlyjtqkayz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089129.8779972-291-171095332477232/AnsiballZ_ini_file.py'
Oct 10 09:38:50 compute-1 sudo[51812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:50 compute-1 python3.9[51814]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:38:50 compute-1 sudo[51812]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:51 compute-1 sudo[51964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzgjadbmqqxqixxsbrzkmygaszcveufg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089130.8195705-384-113173110165778/AnsiballZ_dnf.py'
Oct 10 09:38:51 compute-1 sudo[51964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:51 compute-1 python3.9[51966]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 09:38:52 compute-1 sudo[51964]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:53 compute-1 sudo[52117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdyigmnhuafncmgwzzjziviigvvvahsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089133.237458-417-105774390219715/AnsiballZ_setup.py'
Oct 10 09:38:53 compute-1 sudo[52117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:53 compute-1 python3.9[52119]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:38:53 compute-1 sudo[52117]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:54 compute-1 sudo[52271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbzzvodhkwoyihxaxdjkbrqtyilzhidp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089134.231714-441-134450786415122/AnsiballZ_stat.py'
Oct 10 09:38:54 compute-1 sudo[52271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:54 compute-1 python3.9[52273]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:38:54 compute-1 sudo[52271]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:55 compute-1 sudo[52423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sypexynjzcxsechjizcxosprftqazjiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089135.1060464-468-224404985057137/AnsiballZ_stat.py'
Oct 10 09:38:55 compute-1 sudo[52423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:55 compute-1 python3.9[52425]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:38:55 compute-1 sudo[52423]: pam_unix(sudo:session): session closed for user root
Oct 10 09:38:56 compute-1 sudo[52575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzprrvgjqnzyslaccsznhcmbajjnpaem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089136.0240436-498-231523211735034/AnsiballZ_service_facts.py'
Oct 10 09:38:56 compute-1 sudo[52575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:38:56 compute-1 python3.9[52577]: ansible-service_facts Invoked
Oct 10 09:38:56 compute-1 network[52594]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 10 09:38:56 compute-1 network[52595]: 'network-scripts' will be removed from distribution in near future.
Oct 10 09:38:56 compute-1 network[52596]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 10 09:39:00 compute-1 sudo[52575]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:01 compute-1 sudo[52881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tljaniyxptdbdxycrepphspamdhnfbia ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1760089141.2294562-537-224911868962344/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1760089141.2294562-537-224911868962344/args'
Oct 10 09:39:01 compute-1 sudo[52881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:01 compute-1 sudo[52881]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:02 compute-1 sudo[53048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnxksnhcaakvddryiodxoaeowzpvkctk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089142.12738-570-180977558181247/AnsiballZ_dnf.py'
Oct 10 09:39:02 compute-1 sudo[53048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:02 compute-1 python3.9[53050]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 09:39:03 compute-1 sudo[53048]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:05 compute-1 sudo[53201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwwafxokfseflhhaafaajkwnqeqvygiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089144.465067-609-136726490707655/AnsiballZ_package_facts.py'
Oct 10 09:39:05 compute-1 sudo[53201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:05 compute-1 python3.9[53203]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct 10 09:39:05 compute-1 sudo[53201]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:07 compute-1 sudo[53353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djmzgjlkfkfqsymeowlmoiwixtybzftw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089146.6320033-640-195135685332680/AnsiballZ_stat.py'
Oct 10 09:39:07 compute-1 sudo[53353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:07 compute-1 python3.9[53355]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:39:07 compute-1 sudo[53353]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:07 compute-1 sudo[53478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sujylxegjdnxvenmenqvmkgtzxiryewd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089146.6320033-640-195135685332680/AnsiballZ_copy.py'
Oct 10 09:39:07 compute-1 sudo[53478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:07 compute-1 python3.9[53480]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760089146.6320033-640-195135685332680/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:39:08 compute-1 sudo[53478]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:08 compute-1 sudo[53632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzilacrjcrvqcnfwqkdrapiqjmwndmdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089148.227183-685-137583090722969/AnsiballZ_stat.py'
Oct 10 09:39:08 compute-1 sudo[53632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:08 compute-1 python3.9[53634]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:39:08 compute-1 sudo[53632]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:09 compute-1 sudo[53757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbqhobksnytrtssjmljwnrhrrjdzgkxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089148.227183-685-137583090722969/AnsiballZ_copy.py'
Oct 10 09:39:09 compute-1 sudo[53757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:09 compute-1 python3.9[53759]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760089148.227183-685-137583090722969/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:39:09 compute-1 sudo[53757]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:10 compute-1 sudo[53911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxhofurzpmeyvpjfrbstrncwirolsqfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089150.4408154-747-65781032403126/AnsiballZ_lineinfile.py'
Oct 10 09:39:10 compute-1 sudo[53911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:11 compute-1 python3.9[53913]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:39:11 compute-1 sudo[53911]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:12 compute-1 sudo[54065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhzmyvutoyoptyfuicjbachtgcfzqpoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089152.1975267-793-16366117908562/AnsiballZ_setup.py'
Oct 10 09:39:12 compute-1 sudo[54065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:12 compute-1 python3.9[54067]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 09:39:13 compute-1 sudo[54065]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:13 compute-1 sudo[54149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phrcpkyvopozkveputgwgwknrtmiiwfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089152.1975267-793-16366117908562/AnsiballZ_systemd.py'
Oct 10 09:39:13 compute-1 sudo[54149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:14 compute-1 python3.9[54151]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:39:14 compute-1 sudo[54149]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:16 compute-1 sudo[54303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqwlviltibyblnlmvobphxkgcmccvzqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089156.0305657-841-63263192384298/AnsiballZ_setup.py'
Oct 10 09:39:16 compute-1 sudo[54303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:16 compute-1 python3.9[54305]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 09:39:16 compute-1 sudo[54303]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:17 compute-1 sudo[54387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdovvxygpwpzhlmrzejxjgpgdsfqvlhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089156.0305657-841-63263192384298/AnsiballZ_systemd.py'
Oct 10 09:39:17 compute-1 sudo[54387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:17 compute-1 python3.9[54389]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 09:39:17 compute-1 chronyd[795]: chronyd exiting
Oct 10 09:39:17 compute-1 systemd[1]: Stopping NTP client/server...
Oct 10 09:39:17 compute-1 systemd[1]: chronyd.service: Deactivated successfully.
Oct 10 09:39:17 compute-1 systemd[1]: Stopped NTP client/server.
Oct 10 09:39:17 compute-1 systemd[1]: Starting NTP client/server...
Oct 10 09:39:17 compute-1 chronyd[54397]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 10 09:39:17 compute-1 chronyd[54397]: Frequency -26.925 +/- 0.209 ppm read from /var/lib/chrony/drift
Oct 10 09:39:17 compute-1 chronyd[54397]: Loaded seccomp filter (level 2)
Oct 10 09:39:17 compute-1 systemd[1]: Started NTP client/server.
Oct 10 09:39:17 compute-1 sudo[54387]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:18 compute-1 sshd-session[49600]: Connection closed by 192.168.122.30 port 49038
Oct 10 09:39:18 compute-1 sshd-session[49597]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:39:18 compute-1 systemd[1]: session-13.scope: Deactivated successfully.
Oct 10 09:39:18 compute-1 systemd[1]: session-13.scope: Consumed 29.667s CPU time.
Oct 10 09:39:18 compute-1 systemd-logind[789]: Session 13 logged out. Waiting for processes to exit.
Oct 10 09:39:18 compute-1 systemd-logind[789]: Removed session 13.
Oct 10 09:39:24 compute-1 sshd-session[54423]: Accepted publickey for zuul from 192.168.122.30 port 38002 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:39:24 compute-1 systemd-logind[789]: New session 14 of user zuul.
Oct 10 09:39:24 compute-1 systemd[1]: Started Session 14 of User zuul.
Oct 10 09:39:24 compute-1 sshd-session[54423]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:39:24 compute-1 sudo[54576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfzpffgifzmbwhtfttqlxxetvwpxzgpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089164.2397876-27-241401530909047/AnsiballZ_file.py'
Oct 10 09:39:24 compute-1 sudo[54576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:24 compute-1 python3.9[54578]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:39:24 compute-1 sudo[54576]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:25 compute-1 sudo[54728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-putwowssjqmazoumywgzbkvfvcleveln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089165.1204658-63-11523828820517/AnsiballZ_stat.py'
Oct 10 09:39:25 compute-1 sudo[54728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:25 compute-1 python3.9[54730]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:39:25 compute-1 sudo[54728]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:26 compute-1 sudo[54851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqzzbeocjbosalafigximyburmezdrxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089165.1204658-63-11523828820517/AnsiballZ_copy.py'
Oct 10 09:39:26 compute-1 sudo[54851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:26 compute-1 python3.9[54853]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760089165.1204658-63-11523828820517/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:39:26 compute-1 sudo[54851]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:26 compute-1 sshd-session[54426]: Connection closed by 192.168.122.30 port 38002
Oct 10 09:39:26 compute-1 sshd-session[54423]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:39:26 compute-1 systemd[1]: session-14.scope: Deactivated successfully.
Oct 10 09:39:26 compute-1 systemd[1]: session-14.scope: Consumed 1.741s CPU time.
Oct 10 09:39:26 compute-1 systemd-logind[789]: Session 14 logged out. Waiting for processes to exit.
Oct 10 09:39:26 compute-1 systemd-logind[789]: Removed session 14.
Oct 10 09:39:32 compute-1 sshd-session[54878]: Accepted publickey for zuul from 192.168.122.30 port 38010 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:39:32 compute-1 systemd-logind[789]: New session 15 of user zuul.
Oct 10 09:39:32 compute-1 systemd[1]: Started Session 15 of User zuul.
Oct 10 09:39:32 compute-1 sshd-session[54878]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:39:33 compute-1 python3.9[55031]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:39:34 compute-1 sudo[55185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anyyeukhokzrvsvdldvxgwppuikaohsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089173.83963-60-248280571892999/AnsiballZ_file.py'
Oct 10 09:39:34 compute-1 sudo[55185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:34 compute-1 python3.9[55187]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:39:34 compute-1 sudo[55185]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:35 compute-1 sudo[55360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uljwuxisdikxpinqhvcibitlvtfsphfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089174.755317-84-203807779769119/AnsiballZ_stat.py'
Oct 10 09:39:35 compute-1 sudo[55360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:35 compute-1 python3.9[55362]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:39:35 compute-1 sudo[55360]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:36 compute-1 sudo[55483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmtcdmkgcbfnucneuhcxqucihswfvpfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089174.755317-84-203807779769119/AnsiballZ_copy.py'
Oct 10 09:39:36 compute-1 sudo[55483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:36 compute-1 python3.9[55485]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1760089174.755317-84-203807779769119/.source.json _original_basename=.60nuqqxp follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:39:36 compute-1 sudo[55483]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:37 compute-1 sudo[55635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgofqrssupldgatnaqsqbdzbvhlaukmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089176.8322256-153-8379793235791/AnsiballZ_stat.py'
Oct 10 09:39:37 compute-1 sudo[55635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:37 compute-1 python3.9[55637]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:39:37 compute-1 sudo[55635]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:37 compute-1 sudo[55758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-faufskdcncrqulgxeuzpmwejsqdrgzcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089176.8322256-153-8379793235791/AnsiballZ_copy.py'
Oct 10 09:39:37 compute-1 sudo[55758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:38 compute-1 python3.9[55760]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760089176.8322256-153-8379793235791/.source _original_basename=.z3qttai8 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:39:38 compute-1 sudo[55758]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:38 compute-1 sudo[55910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulzwxbdjskipozgcoqdvyoosppdpzmun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089178.3616579-201-269538421019850/AnsiballZ_file.py'
Oct 10 09:39:38 compute-1 sudo[55910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:38 compute-1 python3.9[55912]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:39:38 compute-1 sudo[55910]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:39 compute-1 sudo[56062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtzgscodornpccxsenhaohliuhvljcyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089179.2261026-225-2590318932538/AnsiballZ_stat.py'
Oct 10 09:39:39 compute-1 sudo[56062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:39 compute-1 python3.9[56064]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:39:39 compute-1 sudo[56062]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:40 compute-1 sudo[56185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnabcvhrfzxnhrzpeartfiybjywuczxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089179.2261026-225-2590318932538/AnsiballZ_copy.py'
Oct 10 09:39:40 compute-1 sudo[56185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:40 compute-1 python3.9[56187]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760089179.2261026-225-2590318932538/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:39:40 compute-1 sudo[56185]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:41 compute-1 sudo[56337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ettixesmdkqyneltgvunyjldrfiagkni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089180.6732674-225-248432976007820/AnsiballZ_stat.py'
Oct 10 09:39:41 compute-1 sudo[56337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:41 compute-1 python3.9[56339]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:39:41 compute-1 sudo[56337]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:41 compute-1 sudo[56460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfnvofwpvznkteftfjynlruamsmynrkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089180.6732674-225-248432976007820/AnsiballZ_copy.py'
Oct 10 09:39:41 compute-1 sudo[56460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:41 compute-1 python3.9[56462]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760089180.6732674-225-248432976007820/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:39:41 compute-1 sudo[56460]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:42 compute-1 sudo[56612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyibfdjdgodxlwakxhsaqulahsfqxtzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089182.1484709-312-100341661780262/AnsiballZ_file.py'
Oct 10 09:39:42 compute-1 sudo[56612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:42 compute-1 python3.9[56614]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:39:42 compute-1 sudo[56612]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:43 compute-1 sudo[56764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iraxffcafmohtlovtihtdgyhvgwvwwfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089182.958253-336-97552927995313/AnsiballZ_stat.py'
Oct 10 09:39:43 compute-1 sudo[56764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:43 compute-1 python3.9[56766]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:39:43 compute-1 sudo[56764]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:43 compute-1 sudo[56887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wboqlposrhqyeayuyionoierjfivjinq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089182.958253-336-97552927995313/AnsiballZ_copy.py'
Oct 10 09:39:43 compute-1 sudo[56887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:44 compute-1 python3.9[56889]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760089182.958253-336-97552927995313/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:39:44 compute-1 sudo[56887]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:44 compute-1 sudo[57039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niwjxodpidhfgipjkcptccrybbghkjhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089184.332451-382-152919058965498/AnsiballZ_stat.py'
Oct 10 09:39:44 compute-1 sudo[57039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:44 compute-1 python3.9[57041]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:39:44 compute-1 sudo[57039]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:45 compute-1 sudo[57162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bratwgznlveoxlsiwegsiyvtqgjlojaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089184.332451-382-152919058965498/AnsiballZ_copy.py'
Oct 10 09:39:45 compute-1 sudo[57162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:45 compute-1 python3.9[57164]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760089184.332451-382-152919058965498/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:39:45 compute-1 sudo[57162]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:46 compute-1 sudo[57314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmjercdeyuczzyxxjcvhoqhiksxqmvps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089185.7213862-426-86742151484174/AnsiballZ_systemd.py'
Oct 10 09:39:46 compute-1 sudo[57314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:46 compute-1 python3.9[57316]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:39:46 compute-1 systemd[1]: Reloading.
Oct 10 09:39:46 compute-1 systemd-rc-local-generator[57337]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:39:46 compute-1 systemd-sysv-generator[57343]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:39:46 compute-1 systemd[1]: Reloading.
Oct 10 09:39:47 compute-1 systemd-rc-local-generator[57379]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:39:47 compute-1 systemd-sysv-generator[57384]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:39:47 compute-1 systemd[1]: Starting EDPM Container Shutdown...
Oct 10 09:39:47 compute-1 systemd[1]: Finished EDPM Container Shutdown.
Oct 10 09:39:47 compute-1 sudo[57314]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:47 compute-1 sudo[57541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orwznlqwpmkkphqpguumqatnhfgxsmwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089187.4173167-450-258700906599969/AnsiballZ_stat.py'
Oct 10 09:39:47 compute-1 sudo[57541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:47 compute-1 python3.9[57543]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:39:47 compute-1 sudo[57541]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:48 compute-1 sudo[57664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vveftmahhlqoddzrrgchqghcxzdeqxpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089187.4173167-450-258700906599969/AnsiballZ_copy.py'
Oct 10 09:39:48 compute-1 sudo[57664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:48 compute-1 python3.9[57666]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760089187.4173167-450-258700906599969/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:39:48 compute-1 sudo[57664]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:49 compute-1 sudo[57816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpxffoflpcqwvdyzlksfbsjuqviburso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089188.8319273-495-96657470353741/AnsiballZ_stat.py'
Oct 10 09:39:49 compute-1 sudo[57816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:49 compute-1 python3.9[57818]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:39:49 compute-1 sudo[57816]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:49 compute-1 sudo[57939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onhsmuhqhjwinttaqztxfmeugpqpjhva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089188.8319273-495-96657470353741/AnsiballZ_copy.py'
Oct 10 09:39:49 compute-1 sudo[57939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:50 compute-1 python3.9[57941]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760089188.8319273-495-96657470353741/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:39:50 compute-1 sudo[57939]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:50 compute-1 sudo[58091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjdrsfojqaymmawvddqvbisycvtrecli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089190.2239294-540-172618330623984/AnsiballZ_systemd.py'
Oct 10 09:39:50 compute-1 sudo[58091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:51 compute-1 python3.9[58093]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:39:51 compute-1 systemd[1]: Reloading.
Oct 10 09:39:51 compute-1 systemd-rc-local-generator[58124]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:39:51 compute-1 systemd-sysv-generator[58127]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:39:52 compute-1 systemd[1]: Reloading.
Oct 10 09:39:52 compute-1 systemd-sysv-generator[58158]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:39:52 compute-1 systemd-rc-local-generator[58153]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:39:52 compute-1 systemd[1]: Starting Create netns directory...
Oct 10 09:39:52 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 10 09:39:52 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 10 09:39:52 compute-1 systemd[1]: Finished Create netns directory.
Oct 10 09:39:52 compute-1 sudo[58091]: pam_unix(sudo:session): session closed for user root
Oct 10 09:39:53 compute-1 python3.9[58322]: ansible-ansible.builtin.service_facts Invoked
Oct 10 09:39:53 compute-1 network[58339]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 10 09:39:53 compute-1 network[58340]: 'network-scripts' will be removed from distribution in near future.
Oct 10 09:39:53 compute-1 network[58341]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 10 09:39:59 compute-1 sudo[58603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukaoyhxtqcznweqxjflhpjpkuzaqavxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089198.7076614-588-302936087070/AnsiballZ_systemd.py'
Oct 10 09:39:59 compute-1 sudo[58603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:39:59 compute-1 python3.9[58605]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:39:59 compute-1 systemd[1]: Reloading.
Oct 10 09:39:59 compute-1 systemd-sysv-generator[58639]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:39:59 compute-1 systemd-rc-local-generator[58634]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:39:59 compute-1 systemd[1]: Stopping IPv4 firewall with iptables...
Oct 10 09:39:59 compute-1 iptables.init[58645]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Oct 10 09:40:00 compute-1 iptables.init[58645]: iptables: Flushing firewall rules: [  OK  ]
Oct 10 09:40:00 compute-1 systemd[1]: iptables.service: Deactivated successfully.
Oct 10 09:40:00 compute-1 systemd[1]: Stopped IPv4 firewall with iptables.
Oct 10 09:40:00 compute-1 sudo[58603]: pam_unix(sudo:session): session closed for user root
Oct 10 09:40:00 compute-1 sudo[58839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bupzhckelznihazthfzyeyyjmtzfkwbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089200.341304-588-264435036077186/AnsiballZ_systemd.py'
Oct 10 09:40:00 compute-1 sudo[58839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:40:00 compute-1 python3.9[58841]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:40:01 compute-1 sudo[58839]: pam_unix(sudo:session): session closed for user root
Oct 10 09:40:01 compute-1 sudo[58993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aficvqdvsukkhtqepfnbapjafrlluyui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089201.3647609-636-247622108460768/AnsiballZ_systemd.py'
Oct 10 09:40:01 compute-1 sudo[58993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:40:01 compute-1 python3.9[58995]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:40:02 compute-1 systemd[1]: Reloading.
Oct 10 09:40:02 compute-1 systemd-rc-local-generator[59023]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:40:02 compute-1 systemd-sysv-generator[59027]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:40:02 compute-1 systemd[1]: Starting Netfilter Tables...
Oct 10 09:40:02 compute-1 systemd[1]: Finished Netfilter Tables.
Oct 10 09:40:02 compute-1 sudo[58993]: pam_unix(sudo:session): session closed for user root
Oct 10 09:40:03 compute-1 sudo[59186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyfewnbrdwwlkxkvhrwzrffulxzsteih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089202.8201485-660-55547358300701/AnsiballZ_command.py'
Oct 10 09:40:03 compute-1 sudo[59186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:40:03 compute-1 python3.9[59188]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:40:03 compute-1 sudo[59186]: pam_unix(sudo:session): session closed for user root
Oct 10 09:40:04 compute-1 sudo[59339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzgozttagtxgwpzyjzqfufjvkyijacwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089204.0707862-702-138017960649119/AnsiballZ_stat.py'
Oct 10 09:40:04 compute-1 sudo[59339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:40:04 compute-1 python3.9[59341]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:40:04 compute-1 sudo[59339]: pam_unix(sudo:session): session closed for user root
Oct 10 09:40:05 compute-1 sudo[59464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrdvjycgxakemzkiyqndhtsfjmpsxigg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089204.0707862-702-138017960649119/AnsiballZ_copy.py'
Oct 10 09:40:05 compute-1 sudo[59464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:40:05 compute-1 python3.9[59466]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760089204.0707862-702-138017960649119/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=4729b6ffc5b555fa142bf0b6e6dc15609cb89a22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:40:05 compute-1 sudo[59464]: pam_unix(sudo:session): session closed for user root
Oct 10 09:40:06 compute-1 python3.9[59617]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 09:40:06 compute-1 polkitd[6374]: Registered Authentication Agent for unix-process:59619:212001 (system bus name :1.525 [/usr/bin/pkttyagent --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 10 09:40:31 compute-1 polkitd[6374]: Unregistered Authentication Agent for unix-process:59619:212001 (system bus name :1.525, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 10 09:40:31 compute-1 polkit-agent-helper-1[59631]: pam_unix(polkit-1:auth): conversation failed
Oct 10 09:40:31 compute-1 polkit-agent-helper-1[59631]: pam_unix(polkit-1:auth): auth could not identify password for [root]
Oct 10 09:40:31 compute-1 polkitd[6374]: Operator of unix-process:59619:212001 FAILED to authenticate to gain authorization for action org.freedesktop.systemd1.manage-units for system-bus-name::1.524 [<unknown>] (owned by unix-user:zuul)
Oct 10 09:40:31 compute-1 sshd-session[54881]: Connection closed by 192.168.122.30 port 38010
Oct 10 09:40:31 compute-1 sshd-session[54878]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:40:31 compute-1 systemd[1]: session-15.scope: Deactivated successfully.
Oct 10 09:40:31 compute-1 systemd[1]: session-15.scope: Consumed 23.750s CPU time.
Oct 10 09:40:31 compute-1 systemd-logind[789]: Session 15 logged out. Waiting for processes to exit.
Oct 10 09:40:31 compute-1 systemd-logind[789]: Removed session 15.
Oct 10 09:40:43 compute-1 sshd-session[59657]: Accepted publickey for zuul from 192.168.122.30 port 38696 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:40:44 compute-1 systemd-logind[789]: New session 16 of user zuul.
Oct 10 09:40:44 compute-1 systemd[1]: Started Session 16 of User zuul.
Oct 10 09:40:44 compute-1 sshd-session[59657]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:40:45 compute-1 python3.9[59810]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:40:46 compute-1 sudo[59964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmqvtvqerftanjbroqghdnqrwiogkssf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089245.5722446-60-18256767476451/AnsiballZ_file.py'
Oct 10 09:40:46 compute-1 sudo[59964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:40:46 compute-1 python3.9[59966]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:40:46 compute-1 sudo[59964]: pam_unix(sudo:session): session closed for user root
Oct 10 09:40:46 compute-1 sudo[60139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sftdywagszopexppibtwojapdlgggcto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089246.4921124-84-84041055577343/AnsiballZ_stat.py'
Oct 10 09:40:46 compute-1 sudo[60139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:40:47 compute-1 python3.9[60141]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:40:47 compute-1 sudo[60139]: pam_unix(sudo:session): session closed for user root
Oct 10 09:40:47 compute-1 sudo[60217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubqncokqgfnzlfwtdyjlvvmhzicdswox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089246.4921124-84-84041055577343/AnsiballZ_file.py'
Oct 10 09:40:47 compute-1 sudo[60217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:40:47 compute-1 python3.9[60219]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.akvl8ywv recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:40:47 compute-1 sudo[60217]: pam_unix(sudo:session): session closed for user root
Oct 10 09:40:48 compute-1 sudo[60369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qovmoclynvzxsdtrkeqwffzcpfmzuxcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089248.3022902-144-264951309734814/AnsiballZ_stat.py'
Oct 10 09:40:48 compute-1 sudo[60369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:40:48 compute-1 python3.9[60371]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:40:48 compute-1 sudo[60369]: pam_unix(sudo:session): session closed for user root
Oct 10 09:40:49 compute-1 sudo[60447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzpsaeihxbrisgnsnntemxymrjctwjwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089248.3022902-144-264951309734814/AnsiballZ_file.py'
Oct 10 09:40:49 compute-1 sudo[60447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:40:49 compute-1 python3.9[60449]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.axm96psf recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:40:49 compute-1 sudo[60447]: pam_unix(sudo:session): session closed for user root
Oct 10 09:40:50 compute-1 sudo[60599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gptxchxpzzsjmssdjacyirmayqddyfro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089249.7054546-183-216651015332387/AnsiballZ_file.py'
Oct 10 09:40:50 compute-1 sudo[60599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:40:50 compute-1 python3.9[60601]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:40:50 compute-1 sudo[60599]: pam_unix(sudo:session): session closed for user root
Oct 10 09:40:50 compute-1 sudo[60751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmkaczohzyerymyebygdceyazpbwgcny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089250.511471-207-18623532844583/AnsiballZ_stat.py'
Oct 10 09:40:50 compute-1 sudo[60751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:40:51 compute-1 python3.9[60753]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:40:51 compute-1 sudo[60751]: pam_unix(sudo:session): session closed for user root
Oct 10 09:40:51 compute-1 sudo[60829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvsaoesosicdkaetsbpqmitrsdywbagx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089250.511471-207-18623532844583/AnsiballZ_file.py'
Oct 10 09:40:51 compute-1 sudo[60829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:40:51 compute-1 python3.9[60831]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:40:51 compute-1 sudo[60829]: pam_unix(sudo:session): session closed for user root
Oct 10 09:40:52 compute-1 sudo[60981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzphyodgxchfgrbpxpwknzgginayhvon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089251.825555-207-135184463379114/AnsiballZ_stat.py'
Oct 10 09:40:52 compute-1 sudo[60981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:40:52 compute-1 python3.9[60983]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:40:52 compute-1 sudo[60981]: pam_unix(sudo:session): session closed for user root
Oct 10 09:40:52 compute-1 sudo[61059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxaftohobcipciyntifwunagemwizwwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089251.825555-207-135184463379114/AnsiballZ_file.py'
Oct 10 09:40:52 compute-1 sudo[61059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:40:52 compute-1 python3.9[61061]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:40:52 compute-1 sudo[61059]: pam_unix(sudo:session): session closed for user root
Oct 10 09:40:53 compute-1 sudo[61211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njadivjkyjkrcoxqsioehwpayrzqyuql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089253.0536516-276-211433699066059/AnsiballZ_file.py'
Oct 10 09:40:53 compute-1 sudo[61211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:40:53 compute-1 python3.9[61213]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:40:53 compute-1 sudo[61211]: pam_unix(sudo:session): session closed for user root
Oct 10 09:40:54 compute-1 sudo[61363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aujwixfwsbmxdvkcnbtnqgnrwfwibbze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089253.8781772-300-236634842188794/AnsiballZ_stat.py'
Oct 10 09:40:54 compute-1 sudo[61363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:40:54 compute-1 python3.9[61365]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:40:54 compute-1 sudo[61363]: pam_unix(sudo:session): session closed for user root
Oct 10 09:40:54 compute-1 sudo[61441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muhyhxayrqvatugxzkhqgkxqzomyejks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089253.8781772-300-236634842188794/AnsiballZ_file.py'
Oct 10 09:40:54 compute-1 sudo[61441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:40:54 compute-1 python3.9[61443]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:40:54 compute-1 sudo[61441]: pam_unix(sudo:session): session closed for user root
Oct 10 09:40:55 compute-1 sudo[61593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ximafcvhytujwufnyrsjboqospjbnqqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089255.1432822-336-16572001333546/AnsiballZ_stat.py'
Oct 10 09:40:55 compute-1 sudo[61593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:40:55 compute-1 python3.9[61595]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:40:55 compute-1 sudo[61593]: pam_unix(sudo:session): session closed for user root
Oct 10 09:40:55 compute-1 sudo[61671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpfpubpzxoncbwymtlfpwroghomccybi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089255.1432822-336-16572001333546/AnsiballZ_file.py'
Oct 10 09:40:55 compute-1 sudo[61671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:40:56 compute-1 python3.9[61673]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:40:56 compute-1 sudo[61671]: pam_unix(sudo:session): session closed for user root
Oct 10 09:40:57 compute-1 sudo[61823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbhywkqhojybwfexhehxxnrhvpidafth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089256.4227266-372-129810149158562/AnsiballZ_systemd.py'
Oct 10 09:40:57 compute-1 sudo[61823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:40:57 compute-1 python3.9[61825]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:40:57 compute-1 systemd[1]: Reloading.
Oct 10 09:40:57 compute-1 systemd-rc-local-generator[61852]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:40:57 compute-1 systemd-sysv-generator[61856]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:40:57 compute-1 sudo[61823]: pam_unix(sudo:session): session closed for user root
Oct 10 09:40:58 compute-1 sudo[62012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sosaojvqzoyafknztaokquhbftmzgeam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089258.0078297-396-105333271715671/AnsiballZ_stat.py'
Oct 10 09:40:58 compute-1 sudo[62012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:40:58 compute-1 python3.9[62014]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:40:58 compute-1 sudo[62012]: pam_unix(sudo:session): session closed for user root
Oct 10 09:40:58 compute-1 sudo[62090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frwirnuzzpalcpcrsilrtqdlvteuspsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089258.0078297-396-105333271715671/AnsiballZ_file.py'
Oct 10 09:40:58 compute-1 sudo[62090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:40:59 compute-1 python3.9[62092]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:40:59 compute-1 sudo[62090]: pam_unix(sudo:session): session closed for user root
Oct 10 09:40:59 compute-1 sudo[62242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-voobuejjcoizunrbwrgkqoqlljrpxeuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089259.4060874-432-272347557807529/AnsiballZ_stat.py'
Oct 10 09:40:59 compute-1 sudo[62242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:00 compute-1 python3.9[62244]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:41:00 compute-1 sudo[62242]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:00 compute-1 sudo[62320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxnkjpvqfyxopkkakvlexoibtaarzjby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089259.4060874-432-272347557807529/AnsiballZ_file.py'
Oct 10 09:41:00 compute-1 sudo[62320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:00 compute-1 python3.9[62322]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:41:00 compute-1 sudo[62320]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:01 compute-1 sudo[62472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vflfreuljdjencvgqrpejsqseauzmlwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089260.810706-468-23942076535516/AnsiballZ_systemd.py'
Oct 10 09:41:01 compute-1 sudo[62472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:01 compute-1 python3.9[62474]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:41:01 compute-1 systemd[1]: Reloading.
Oct 10 09:41:01 compute-1 systemd-rc-local-generator[62504]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:41:01 compute-1 systemd-sysv-generator[62508]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:41:01 compute-1 systemd[1]: Starting Create netns directory...
Oct 10 09:41:01 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 10 09:41:01 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 10 09:41:01 compute-1 systemd[1]: Finished Create netns directory.
Oct 10 09:41:01 compute-1 sudo[62472]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:02 compute-1 python3.9[62666]: ansible-ansible.builtin.service_facts Invoked
Oct 10 09:41:02 compute-1 network[62683]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 10 09:41:02 compute-1 network[62684]: 'network-scripts' will be removed from distribution in near future.
Oct 10 09:41:02 compute-1 network[62685]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 10 09:41:07 compute-1 sudo[62946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hamlqgkdnuxobkjmgmpdfgpmlyolgdth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089267.2410893-546-111062278261541/AnsiballZ_stat.py'
Oct 10 09:41:07 compute-1 sudo[62946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:07 compute-1 python3.9[62948]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:41:07 compute-1 sudo[62946]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:08 compute-1 sudo[63024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btkunnyrhgxtxghakrxqarkstuimconv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089267.2410893-546-111062278261541/AnsiballZ_file.py'
Oct 10 09:41:08 compute-1 sudo[63024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:08 compute-1 python3.9[63026]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:41:08 compute-1 sudo[63024]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:09 compute-1 sudo[63176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkcnrtzjukdemhnpxsvkrgypdahawwul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089268.7217073-585-163270084537863/AnsiballZ_file.py'
Oct 10 09:41:09 compute-1 sudo[63176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:09 compute-1 python3.9[63178]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:41:09 compute-1 sudo[63176]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:09 compute-1 sudo[63328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quoyotvnnlywrxusalzkfzwzozwjycpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089269.5123215-609-199794824050836/AnsiballZ_stat.py'
Oct 10 09:41:09 compute-1 sudo[63328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:10 compute-1 python3.9[63330]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:41:10 compute-1 sudo[63328]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:10 compute-1 sudo[63451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nncojhbbktuluewevblqzdtltwhjvcqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089269.5123215-609-199794824050836/AnsiballZ_copy.py'
Oct 10 09:41:10 compute-1 sudo[63451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:10 compute-1 python3.9[63453]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760089269.5123215-609-199794824050836/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:41:10 compute-1 sudo[63451]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:11 compute-1 sudo[63603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmcqvowwzcuahkcmojkssjbobreaxubu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089271.1915593-664-38033983760104/AnsiballZ_timezone.py'
Oct 10 09:41:11 compute-1 sudo[63603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:11 compute-1 python3.9[63605]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 10 09:41:11 compute-1 systemd[1]: Starting Time & Date Service...
Oct 10 09:41:12 compute-1 systemd[1]: Started Time & Date Service.
Oct 10 09:41:12 compute-1 sudo[63603]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:12 compute-1 sudo[63759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ionuhebbhtwbwwetwznsikxohnjtgtla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089272.6166973-690-39220631095470/AnsiballZ_file.py'
Oct 10 09:41:12 compute-1 sudo[63759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:13 compute-1 python3.9[63761]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:41:13 compute-1 sudo[63759]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:13 compute-1 sudo[63911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rorhhthicixhjpgbqwxwaejvzucetpsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089273.3690977-714-61199807161832/AnsiballZ_stat.py'
Oct 10 09:41:13 compute-1 sudo[63911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:13 compute-1 python3.9[63913]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:41:13 compute-1 sudo[63911]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:14 compute-1 sudo[64034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgzygyzanbqspfmzszmiyyzrhltdekya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089273.3690977-714-61199807161832/AnsiballZ_copy.py'
Oct 10 09:41:14 compute-1 sudo[64034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:14 compute-1 python3.9[64036]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760089273.3690977-714-61199807161832/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:41:14 compute-1 sudo[64034]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:15 compute-1 sudo[64186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhcluxeykukpshfcrkuizfwsofzbwimj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089274.8935711-759-167202637791233/AnsiballZ_stat.py'
Oct 10 09:41:15 compute-1 sudo[64186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:15 compute-1 python3.9[64188]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:41:15 compute-1 sudo[64186]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:15 compute-1 sudo[64309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgsmnvfjexlcgeijekorfnfcbjdqafxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089274.8935711-759-167202637791233/AnsiballZ_copy.py'
Oct 10 09:41:15 compute-1 sudo[64309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:16 compute-1 python3.9[64311]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760089274.8935711-759-167202637791233/.source.yaml _original_basename=.em7r3cqh follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:41:16 compute-1 sudo[64309]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:16 compute-1 sudo[64461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebyhmfgmyoubvzksaaogpkkpgmnbvonc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089276.3250012-804-104701402946345/AnsiballZ_stat.py'
Oct 10 09:41:16 compute-1 sudo[64461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:16 compute-1 python3.9[64463]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:41:16 compute-1 sudo[64461]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:17 compute-1 sudo[64584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kiaosveyfdjhhuxxmqvvwuwtugaiobal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089276.3250012-804-104701402946345/AnsiballZ_copy.py'
Oct 10 09:41:17 compute-1 sudo[64584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:17 compute-1 python3.9[64586]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760089276.3250012-804-104701402946345/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:41:17 compute-1 sudo[64584]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:18 compute-1 sudo[64736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-guplazqxexktgojgpfmrtkteqqgjmhgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089277.8636954-849-49342293439630/AnsiballZ_command.py'
Oct 10 09:41:18 compute-1 sudo[64736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:18 compute-1 python3.9[64738]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:41:18 compute-1 sudo[64736]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:19 compute-1 sudo[64889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mootifbaazecbecvzaiggjjhhoexwpqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089278.8369842-874-238565706943379/AnsiballZ_command.py'
Oct 10 09:41:19 compute-1 sudo[64889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:19 compute-1 python3.9[64891]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:41:19 compute-1 sudo[64889]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:20 compute-1 sudo[65042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suhzyqemksiwklzwfehexszwrpkhpnvk ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760089279.7323568-897-256967014357870/AnsiballZ_edpm_nftables_from_files.py'
Oct 10 09:41:20 compute-1 sudo[65042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:20 compute-1 python3[65044]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 10 09:41:20 compute-1 sudo[65042]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:21 compute-1 sudo[65194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyolytalsdrlgokjxcmusjnznabmvshg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089280.8428023-921-203022338466690/AnsiballZ_stat.py'
Oct 10 09:41:21 compute-1 sudo[65194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:21 compute-1 python3.9[65196]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:41:21 compute-1 sudo[65194]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:21 compute-1 sudo[65317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsgxwntvmctdujymrticnjuwrngmkboa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089280.8428023-921-203022338466690/AnsiballZ_copy.py'
Oct 10 09:41:21 compute-1 sudo[65317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:22 compute-1 python3.9[65319]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760089280.8428023-921-203022338466690/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:41:22 compute-1 sudo[65317]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:22 compute-1 sudo[65469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvznmemolipealljcfqagpwomyrercez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089282.4303887-966-162624839359187/AnsiballZ_stat.py'
Oct 10 09:41:22 compute-1 sudo[65469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:23 compute-1 python3.9[65471]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:41:23 compute-1 sudo[65469]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:23 compute-1 sudo[65592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvqeozejwsagieihulhlwmwicunfckav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089282.4303887-966-162624839359187/AnsiballZ_copy.py'
Oct 10 09:41:23 compute-1 sudo[65592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:23 compute-1 python3.9[65594]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760089282.4303887-966-162624839359187/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:41:23 compute-1 sudo[65592]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:24 compute-1 sudo[65744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggehgpbyrpebxvwhilobaxblclslkzzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089283.9711156-1012-242607318364306/AnsiballZ_stat.py'
Oct 10 09:41:24 compute-1 sudo[65744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:24 compute-1 python3.9[65746]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:41:24 compute-1 sudo[65744]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:25 compute-1 sudo[65867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smtifaeyhgdmjhorioauazxbuifofpob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089283.9711156-1012-242607318364306/AnsiballZ_copy.py'
Oct 10 09:41:25 compute-1 sudo[65867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:25 compute-1 python3.9[65869]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760089283.9711156-1012-242607318364306/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:41:25 compute-1 sudo[65867]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:25 compute-1 sudo[66019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naiefamlzhfokouyytnzhtzmedouiuve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089285.566291-1057-238607967586954/AnsiballZ_stat.py'
Oct 10 09:41:25 compute-1 sudo[66019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:26 compute-1 python3.9[66021]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:41:26 compute-1 sudo[66019]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:26 compute-1 sudo[66142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eevukkcnktithoieyyzzsevlwwqzfbko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089285.566291-1057-238607967586954/AnsiballZ_copy.py'
Oct 10 09:41:26 compute-1 sudo[66142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:26 compute-1 python3.9[66144]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760089285.566291-1057-238607967586954/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:41:26 compute-1 sudo[66142]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:27 compute-1 sudo[66294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oersvpcwndulxvyeoswfebnlftreejlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089287.0738225-1101-120118822970884/AnsiballZ_stat.py'
Oct 10 09:41:27 compute-1 sudo[66294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:27 compute-1 chronyd[54397]: Selected source 172.97.210.214 (pool.ntp.org)
Oct 10 09:41:27 compute-1 python3.9[66296]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:41:27 compute-1 sudo[66294]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:28 compute-1 sudo[66417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jydmwobxzkisxkcsdfterdccttrcmipe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089287.0738225-1101-120118822970884/AnsiballZ_copy.py'
Oct 10 09:41:28 compute-1 sudo[66417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:28 compute-1 python3.9[66419]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760089287.0738225-1101-120118822970884/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:41:28 compute-1 sudo[66417]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:29 compute-1 sudo[66569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvrhfohcsduusyqwdiucfuvvpcunkerm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089288.7099378-1146-96324161310983/AnsiballZ_file.py'
Oct 10 09:41:29 compute-1 sudo[66569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:29 compute-1 python3.9[66571]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:41:29 compute-1 sudo[66569]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:29 compute-1 sudo[66721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbczazqqlsbslbmajepwevkiltacbvnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089289.5499666-1170-247384321580581/AnsiballZ_command.py'
Oct 10 09:41:29 compute-1 sudo[66721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:30 compute-1 python3.9[66723]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:41:30 compute-1 sudo[66721]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:31 compute-1 sudo[66880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lahtymykqwvwcaqeidttycpgknjpfkyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089290.5163274-1194-133375304418707/AnsiballZ_blockinfile.py'
Oct 10 09:41:31 compute-1 sudo[66880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:31 compute-1 python3.9[66882]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:41:31 compute-1 sudo[66880]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:32 compute-1 sudo[67033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrqqwyfuasgepuaihtkraxyeahwtmfbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089291.6408234-1221-100054685945616/AnsiballZ_file.py'
Oct 10 09:41:32 compute-1 sudo[67033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:32 compute-1 python3.9[67035]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:41:32 compute-1 sudo[67033]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:32 compute-1 sudo[67185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aarchyzzwxzuttrkinhgwvqqfknuszen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089292.4805064-1221-157603030608569/AnsiballZ_file.py'
Oct 10 09:41:32 compute-1 sudo[67185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:33 compute-1 python3.9[67187]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:41:33 compute-1 sudo[67185]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:33 compute-1 sudo[67337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnmtzfesekdwpfvylaokakhkfbpazmtj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089293.2699423-1266-238856278980934/AnsiballZ_mount.py'
Oct 10 09:41:33 compute-1 sudo[67337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:33 compute-1 python3.9[67339]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 10 09:41:34 compute-1 sudo[67337]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:34 compute-1 sudo[67490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfldaqvqzrdhdreonsbqelxruxmuhnoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089294.1521282-1266-2615788455284/AnsiballZ_mount.py'
Oct 10 09:41:34 compute-1 sudo[67490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:34 compute-1 python3.9[67492]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 10 09:41:34 compute-1 sudo[67490]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:35 compute-1 sshd-session[59660]: Connection closed by 192.168.122.30 port 38696
Oct 10 09:41:35 compute-1 sshd-session[59657]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:41:35 compute-1 systemd[1]: session-16.scope: Deactivated successfully.
Oct 10 09:41:35 compute-1 systemd[1]: session-16.scope: Consumed 39.781s CPU time.
Oct 10 09:41:35 compute-1 systemd-logind[789]: Session 16 logged out. Waiting for processes to exit.
Oct 10 09:41:35 compute-1 systemd-logind[789]: Removed session 16.
Oct 10 09:41:41 compute-1 sshd-session[67518]: Accepted publickey for zuul from 192.168.122.30 port 58242 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:41:41 compute-1 systemd-logind[789]: New session 17 of user zuul.
Oct 10 09:41:41 compute-1 systemd[1]: Started Session 17 of User zuul.
Oct 10 09:41:41 compute-1 sshd-session[67518]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:41:41 compute-1 sudo[67671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phjbjunluvpsginotjzrqihfiadiqznz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089301.4020529-19-17897130770154/AnsiballZ_tempfile.py'
Oct 10 09:41:42 compute-1 sudo[67671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:42 compute-1 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 10 09:41:42 compute-1 python3.9[67673]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct 10 09:41:42 compute-1 sudo[67671]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:42 compute-1 sudo[67825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-temvmmxirvrfstebqljymqyxbvrksnrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089302.4262176-55-252960873952823/AnsiballZ_stat.py'
Oct 10 09:41:42 compute-1 sudo[67825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:43 compute-1 python3.9[67827]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:41:43 compute-1 sudo[67825]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:44 compute-1 sudo[67977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prrvsizivgqjiaesqdpwwnxxketfczem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089303.3909183-85-138073585394254/AnsiballZ_setup.py'
Oct 10 09:41:44 compute-1 sudo[67977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:44 compute-1 python3.9[67979]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:41:44 compute-1 sudo[67977]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:45 compute-1 sudo[68129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dltdtnmymqwghizfngkyskbbmoaenbzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089304.6530492-110-207648556774054/AnsiballZ_blockinfile.py'
Oct 10 09:41:45 compute-1 sudo[68129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:45 compute-1 python3.9[68131]: ansible-ansible.builtin.blockinfile Invoked with block=compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCs576V3VvbSgv48Ml4JM3ripPY5VUVh8vdkDr1njjfd7J/WrQQkTf/D0b7+eGTXj3Y1fx1/haVrDafo7g0NqcSZX+zNUgTCnYPWafo7RMG4Q7ITVk1NPIkAC1cDUxHNeWhXaOkxCz96sTkO4aNW3uoFjsp2JkJtRJmHzT7q/bc0N9x7YcWh9vwRRBiOKlV8cWMHuHUzOlloEQLN67Dht1xHWr1eO/SITqUlWY13tc/54xQuo8nBQNNX9ArhMbJz2a9AoNVUAAYFF8hWFI5ES/GL9qsCp8dnmAtrY4Rc07QmHo1RkcjXe1f6D+vymRIP3YOqIjlWp0blCTfcCGno5lBa9f5JachIsogk+5+GYx4AAbWLyxxecfKzdCxrGnQlfFgldc1xDN1RG+8HwFEAuHQDWTCDUgF67FXSHy7aVxrdzU4046193/o3VKTpSaJmFldASxFgyUeujs56OgC0qYM0zKV4jOsMBcocVHvH/1FOPWIr81XXYvu6C/Ntd6sBj0=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGSf7pFS/S1SmUMk/yMobwR+LTaQZlAhBqo7Ido5r8dg
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB1l0EOuMseZ7ulHkfzzVtKv+5A9EWRy+oXVB+t370vohhJoN3+lviS8xoR8GttJUcHVCaeioniRtOWysbNdC0I=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDarlOcgDXqRdSww3oIuqu7nGIBJToNGSnU1ljOr6GTlHTxxOoTztIrvZrPaJA8w/ixztkhFZZSdRPw4meYayY05CNu9SneiL62twzDLDsqeDPAspkh69Ljj5aGCLf6GJDiK0m2h1jLDIFtXH3lIQE9781zA7ZQ8+/xeF4yRS1/Fb5CXDG+oi/J0veCffs6t0TYmrUfSgS2H2y0UxNu7C6GoQKRde1arPLOYexvlg2RjlWM6Ex4JCqTAd9EN330Kh4HUr3r46ET8mwi1mPndibbiW0heXgrg8FeV5hBqOxQsGgLEKpX1cNAz6Rr0C5Hg1xfGcsJtep88vbJFmMyV1jNowDtJCYpprqa16Nj35HBuuz7zbzVlIdeQhEJ9I4I7eNhUxlb2/XYRXy2hfsrM9D2TP7B+bVPLjlqgqy8stBhGBCtH32ppNsXHE6uGPHMovcz2VhbP/P3sp9NQV+hF2Q0RbBXrQZkEI9YJdhxQw5hyOqwfPrEEBFy8FpzSKfBAW0=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC1nQuW/lbxVJxo9H20J7i0+Z6cHtufrF4VbA6zs724f
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB0oTxSrAqx34tAubl7rouYPI7qhs6NhoDmGr3PTW1+mypEQw0EO+pZ99zSRnweC5RBoL080AgUKo7KN+v3LDHw=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDUnwO+j5aInA4FKMx5pWF8B0Zp6L17GsYV5RBbu6iT67LtXjwbz5nP4EC7t80boMHnS7DRNCAxF0FNMVhQ9o4+1E1n2mrUxxAw8YxcZTabu/lAqRb4I6RzmXdXSA9mF8O3onswi/KhJg6YUTFEWCuxWrMLco15IatKi+hNqcRUk1DreR2L/YN0W5qXkvj1z3aoph1h3Yn1lRjuQDrVHp6lCywixC2pHwYG+CrPyX+0PkXJg+JRvRdxNCIw0D0zOkJrnppmT8XpIj42JLRUGGV592XFVXHiEhZdOI2bdzPy490EfIbWF9Symqi/V5vf8SK9LMOscHXkD7jsT6VKzsUXyk6/IzzZ2TzhD173lt8HpRJyaZq4ME0ZSVYNyD58DN/CQ3xpO1c1E8Wp4fUswc4WHmb/eILnY0lDXOZt6Hb/e+K6RHu5e5GOo0KSfei/LyrqJkBQn2P8UkbJvrUh2bNw+whjvT5CmXd3rPCw+Xq3/K3Gpit1K/4pC0zGC+CQr7E=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILklS4uW4IrGY5dWZTg4VeKVeFB3jPeUpu/8f4D1+rd5
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCelD2lLiMWT09YjxTI9IfdSnHfdMuHKAAEYFKZmJg34mgwUIDqUQqoc9I6a7Ps9pRizY+UpHWL//lD7hvvhD5k=
                                             create=True mode=0644 path=/tmp/ansible.nf09jo51 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:41:45 compute-1 sudo[68129]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:46 compute-1 sudo[68281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgfsifqowvreqjgnpzvkcymcqilqbmcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089305.5905313-134-98477284155311/AnsiballZ_command.py'
Oct 10 09:41:46 compute-1 sudo[68281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:46 compute-1 python3.9[68283]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.nf09jo51' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:41:46 compute-1 sudo[68281]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:47 compute-1 sudo[68435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgjoxwzxruvgjxtickvsgtmqadctnija ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089306.5198443-158-272844290242562/AnsiballZ_file.py'
Oct 10 09:41:47 compute-1 sudo[68435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:47 compute-1 python3.9[68437]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.nf09jo51 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:41:47 compute-1 sudo[68435]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:47 compute-1 sshd-session[67521]: Connection closed by 192.168.122.30 port 58242
Oct 10 09:41:47 compute-1 sshd-session[67518]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:41:47 compute-1 systemd[1]: session-17.scope: Deactivated successfully.
Oct 10 09:41:47 compute-1 systemd[1]: session-17.scope: Consumed 4.355s CPU time.
Oct 10 09:41:47 compute-1 systemd-logind[789]: Session 17 logged out. Waiting for processes to exit.
Oct 10 09:41:47 compute-1 systemd-logind[789]: Removed session 17.
Oct 10 09:41:53 compute-1 sshd-session[68463]: Accepted publickey for zuul from 192.168.122.30 port 41742 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:41:53 compute-1 systemd-logind[789]: New session 18 of user zuul.
Oct 10 09:41:53 compute-1 systemd[1]: Started Session 18 of User zuul.
Oct 10 09:41:53 compute-1 sshd-session[68463]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:41:54 compute-1 python3.9[68616]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:41:55 compute-1 sudo[68770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xufjuxbdqzvhmwkwkijrlcpkfdroxkxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089314.6785355-57-101921411129815/AnsiballZ_systemd.py'
Oct 10 09:41:55 compute-1 sudo[68770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:55 compute-1 python3.9[68772]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 10 09:41:55 compute-1 sudo[68770]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:56 compute-1 sudo[68924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnwvjlsvpkirsethyovptwpsusoyihjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089315.9731472-81-90832077575239/AnsiballZ_systemd.py'
Oct 10 09:41:56 compute-1 sudo[68924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:56 compute-1 python3.9[68926]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 09:41:56 compute-1 sudo[68924]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:57 compute-1 sudo[69077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uctgldvbsutaghxspqbpnwgupdriltxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089316.9379056-108-101477772602318/AnsiballZ_command.py'
Oct 10 09:41:57 compute-1 sudo[69077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:57 compute-1 python3.9[69079]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:41:57 compute-1 sudo[69077]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:58 compute-1 sudo[69230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oevkqqqhbqgmwnpnypkdmofehrsapzki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089317.9327848-132-67773451980081/AnsiballZ_stat.py'
Oct 10 09:41:58 compute-1 sudo[69230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:58 compute-1 python3.9[69232]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:41:58 compute-1 sudo[69230]: pam_unix(sudo:session): session closed for user root
Oct 10 09:41:59 compute-1 sudo[69384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbeskgvchjtnnzotwkvwsghbmnczxokw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089319.0287864-156-103082940873930/AnsiballZ_command.py'
Oct 10 09:41:59 compute-1 sudo[69384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:41:59 compute-1 python3.9[69386]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:41:59 compute-1 sudo[69384]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:00 compute-1 sudo[69539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbiuylytqfhecnbzbhgruzhmrfrjjnwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089319.8584602-180-216438830759110/AnsiballZ_file.py'
Oct 10 09:42:00 compute-1 sudo[69539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:00 compute-1 python3.9[69541]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:42:00 compute-1 sudo[69539]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:01 compute-1 sshd-session[68466]: Connection closed by 192.168.122.30 port 41742
Oct 10 09:42:01 compute-1 sshd-session[68463]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:42:01 compute-1 systemd[1]: session-18.scope: Deactivated successfully.
Oct 10 09:42:01 compute-1 systemd[1]: session-18.scope: Consumed 5.711s CPU time.
Oct 10 09:42:01 compute-1 systemd-logind[789]: Session 18 logged out. Waiting for processes to exit.
Oct 10 09:42:01 compute-1 systemd-logind[789]: Removed session 18.
Oct 10 09:42:06 compute-1 sshd-session[69566]: Accepted publickey for zuul from 192.168.122.30 port 48544 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:42:06 compute-1 systemd-logind[789]: New session 19 of user zuul.
Oct 10 09:42:06 compute-1 systemd[1]: Started Session 19 of User zuul.
Oct 10 09:42:06 compute-1 sshd-session[69566]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:42:07 compute-1 python3.9[69719]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:42:08 compute-1 sudo[69873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxhooevgbttwtgokjwkdwyyxxeglkecu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089328.3034563-63-213216131297965/AnsiballZ_setup.py'
Oct 10 09:42:08 compute-1 sudo[69873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:08 compute-1 python3.9[69875]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 09:42:09 compute-1 sudo[69873]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:09 compute-1 sudo[69957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plxvmpqhueqjgvogqyfkoebgrehjaket ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089328.3034563-63-213216131297965/AnsiballZ_dnf.py'
Oct 10 09:42:09 compute-1 sudo[69957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:09 compute-1 python3.9[69959]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 10 09:42:11 compute-1 sudo[69957]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:11 compute-1 python3.9[70110]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:42:13 compute-1 python3.9[70261]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 10 09:42:14 compute-1 python3.9[70411]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:42:14 compute-1 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 09:42:14 compute-1 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 09:42:14 compute-1 python3.9[70562]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:42:15 compute-1 sshd-session[69569]: Connection closed by 192.168.122.30 port 48544
Oct 10 09:42:15 compute-1 sshd-session[69566]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:42:15 compute-1 systemd[1]: session-19.scope: Deactivated successfully.
Oct 10 09:42:15 compute-1 systemd[1]: session-19.scope: Consumed 6.794s CPU time.
Oct 10 09:42:15 compute-1 systemd-logind[789]: Session 19 logged out. Waiting for processes to exit.
Oct 10 09:42:15 compute-1 systemd-logind[789]: Removed session 19.
Oct 10 09:42:23 compute-1 sshd-session[70588]: Accepted publickey for zuul from 38.102.83.82 port 47934 ssh2: RSA SHA256:RwPGCkYG1Mlcunwa9tTlXvLSrYLunSGhwxtMMuIfos4
Oct 10 09:42:23 compute-1 systemd-logind[789]: New session 20 of user zuul.
Oct 10 09:42:23 compute-1 systemd[1]: Started Session 20 of User zuul.
Oct 10 09:42:23 compute-1 sshd-session[70588]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:42:23 compute-1 sudo[70664]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtcwcehoozsywrywpqdapnmkxypzonqh ; /usr/bin/python3'
Oct 10 09:42:23 compute-1 sudo[70664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:23 compute-1 useradd[70668]: new group: name=ceph-admin, GID=42478
Oct 10 09:42:23 compute-1 useradd[70668]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Oct 10 09:42:23 compute-1 sudo[70664]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:24 compute-1 sudo[70750]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oylwshymgkkiglwcxoxhxiryvsxtqiuh ; /usr/bin/python3'
Oct 10 09:42:24 compute-1 sudo[70750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:24 compute-1 sudo[70750]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:24 compute-1 sudo[70823]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syzuxouotxbwwgtgvnxknhyqopkrselh ; /usr/bin/python3'
Oct 10 09:42:24 compute-1 sudo[70823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:24 compute-1 sudo[70823]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:25 compute-1 sudo[70873]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgwslmlkjsgurmlfrnmelwndilxyeqqh ; /usr/bin/python3'
Oct 10 09:42:25 compute-1 sudo[70873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:25 compute-1 sudo[70873]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:25 compute-1 sudo[70899]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqwobwyemxoqowaufllwmegeyealwvdf ; /usr/bin/python3'
Oct 10 09:42:25 compute-1 sudo[70899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:25 compute-1 sudo[70899]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:26 compute-1 sudo[70925]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spordebqvtdnkcqkaiexvnsstfgghjjn ; /usr/bin/python3'
Oct 10 09:42:26 compute-1 sudo[70925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:26 compute-1 sudo[70925]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:26 compute-1 sudo[70951]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpprcdtpcyqopcalfzxjdzqjqbwbrgxs ; /usr/bin/python3'
Oct 10 09:42:26 compute-1 sudo[70951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:26 compute-1 sudo[70951]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:27 compute-1 sudo[71029]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tujkddrjijbhijhlwwnngbzmpcrrxhaq ; /usr/bin/python3'
Oct 10 09:42:27 compute-1 sudo[71029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:27 compute-1 sudo[71029]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:27 compute-1 sudo[71102]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdnjegvhmrxmnrqeisgnqlsfcqjcudls ; /usr/bin/python3'
Oct 10 09:42:27 compute-1 sudo[71102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:27 compute-1 sudo[71102]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:28 compute-1 sudo[71204]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acktlihgcdpwrqtkuqckexawnroluaxp ; /usr/bin/python3'
Oct 10 09:42:28 compute-1 sudo[71204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:28 compute-1 sudo[71204]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:28 compute-1 sudo[71277]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cllyonnlmsprfwgbkmzbagxzhjarceob ; /usr/bin/python3'
Oct 10 09:42:28 compute-1 sudo[71277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:28 compute-1 sudo[71277]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:29 compute-1 sudo[71327]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhxntswexhmoqsaefwfcxahyvcigzokt ; /usr/bin/python3'
Oct 10 09:42:29 compute-1 sudo[71327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:29 compute-1 python3[71329]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:42:30 compute-1 sudo[71327]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:31 compute-1 sudo[71422]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jophkhzgvxxnahyhbgzwfmdyuxavrngf ; /usr/bin/python3'
Oct 10 09:42:31 compute-1 sudo[71422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:31 compute-1 python3[71424]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 10 09:42:32 compute-1 sudo[71422]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:33 compute-1 sudo[71449]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdqmwodqgsfllvirsjxradvvceewonqs ; /usr/bin/python3'
Oct 10 09:42:33 compute-1 sudo[71449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:33 compute-1 python3[71451]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:42:33 compute-1 sudo[71449]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:33 compute-1 sudo[71475]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxdmwrywcnxrejwjsfetruqohbufxwfb ; /usr/bin/python3'
Oct 10 09:42:33 compute-1 sudo[71475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:33 compute-1 python3[71477]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:42:33 compute-1 kernel: loop: module loaded
Oct 10 09:42:33 compute-1 kernel: loop3: detected capacity change from 0 to 41943040
Oct 10 09:42:33 compute-1 sudo[71475]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:33 compute-1 sudo[71510]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grbflkcqaotmafujsgxpcwuabdeknojx ; /usr/bin/python3'
Oct 10 09:42:33 compute-1 sudo[71510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:34 compute-1 python3[71512]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:42:34 compute-1 lvm[71515]: PV /dev/loop3 not used.
Oct 10 09:42:34 compute-1 lvm[71517]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 09:42:34 compute-1 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Oct 10 09:42:34 compute-1 lvm[71527]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 09:42:34 compute-1 lvm[71527]: VG ceph_vg0 finished
Oct 10 09:42:34 compute-1 lvm[71525]:   1 logical volume(s) in volume group "ceph_vg0" now active
Oct 10 09:42:34 compute-1 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Oct 10 09:42:34 compute-1 sudo[71510]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:34 compute-1 sudo[71603]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfhxnsgxyfermlbeixqtxrpzfqknhwfg ; /usr/bin/python3'
Oct 10 09:42:34 compute-1 sudo[71603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:34 compute-1 python3[71605]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 10 09:42:34 compute-1 sudo[71603]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:35 compute-1 sudo[71676]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyoaofrepwrdwkksayymbewoxacyujbi ; /usr/bin/python3'
Oct 10 09:42:35 compute-1 sudo[71676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:35 compute-1 python3[71678]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760089354.5783622-33484-211800466340561/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:42:35 compute-1 sudo[71676]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:35 compute-1 sudo[71726]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htjvxtsplegmpinibeeodzastigjmauo ; /usr/bin/python3'
Oct 10 09:42:35 compute-1 sudo[71726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:42:36 compute-1 python3[71728]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:42:36 compute-1 systemd[1]: Reloading.
Oct 10 09:42:36 compute-1 systemd-rc-local-generator[71750]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:42:36 compute-1 systemd-sysv-generator[71756]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:42:36 compute-1 systemd[1]: Starting Ceph OSD losetup...
Oct 10 09:42:36 compute-1 bash[71768]: /dev/loop3: [64513]:4555204 (/var/lib/ceph-osd-0.img)
Oct 10 09:42:36 compute-1 systemd[1]: Finished Ceph OSD losetup.
Oct 10 09:42:36 compute-1 lvm[71769]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 09:42:36 compute-1 lvm[71769]: VG ceph_vg0 finished
Oct 10 09:42:36 compute-1 sudo[71726]: pam_unix(sudo:session): session closed for user root
Oct 10 09:42:38 compute-1 python3[71793]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:43:00 compute-1 PackageKit[30993]: daemon quit
Oct 10 09:43:00 compute-1 systemd[1]: packagekit.service: Deactivated successfully.
Oct 10 09:44:04 compute-1 sshd-session[71837]: Accepted publickey for ceph-admin from 192.168.122.100 port 52998 ssh2: RSA SHA256:iFwOnwcB2x2Q1gpAWZobZa2jCZZy75CuUHv4ViVnHA0
Oct 10 09:44:04 compute-1 systemd-logind[789]: New session 21 of user ceph-admin.
Oct 10 09:44:04 compute-1 systemd[1]: Created slice User Slice of UID 42477.
Oct 10 09:44:04 compute-1 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct 10 09:44:04 compute-1 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct 10 09:44:05 compute-1 systemd[1]: Starting User Manager for UID 42477...
Oct 10 09:44:05 compute-1 systemd[71841]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 10 09:44:05 compute-1 sshd-session[71845]: Accepted publickey for ceph-admin from 192.168.122.100 port 53006 ssh2: RSA SHA256:iFwOnwcB2x2Q1gpAWZobZa2jCZZy75CuUHv4ViVnHA0
Oct 10 09:44:05 compute-1 systemd-logind[789]: New session 23 of user ceph-admin.
Oct 10 09:44:05 compute-1 systemd[71841]: Queued start job for default target Main User Target.
Oct 10 09:44:05 compute-1 systemd[71841]: Created slice User Application Slice.
Oct 10 09:44:05 compute-1 systemd[71841]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 10 09:44:05 compute-1 systemd[71841]: Started Daily Cleanup of User's Temporary Directories.
Oct 10 09:44:05 compute-1 systemd[71841]: Reached target Paths.
Oct 10 09:44:05 compute-1 systemd[71841]: Reached target Timers.
Oct 10 09:44:05 compute-1 systemd[71841]: Starting D-Bus User Message Bus Socket...
Oct 10 09:44:05 compute-1 systemd[71841]: Starting Create User's Volatile Files and Directories...
Oct 10 09:44:05 compute-1 systemd[71841]: Listening on D-Bus User Message Bus Socket.
Oct 10 09:44:05 compute-1 systemd[71841]: Reached target Sockets.
Oct 10 09:44:05 compute-1 systemd[71841]: Finished Create User's Volatile Files and Directories.
Oct 10 09:44:05 compute-1 systemd[71841]: Reached target Basic System.
Oct 10 09:44:05 compute-1 systemd[71841]: Reached target Main User Target.
Oct 10 09:44:05 compute-1 systemd[71841]: Startup finished in 314ms.
Oct 10 09:44:05 compute-1 systemd[1]: Started User Manager for UID 42477.
Oct 10 09:44:05 compute-1 systemd[1]: Started Session 21 of User ceph-admin.
Oct 10 09:44:05 compute-1 systemd[1]: Started Session 23 of User ceph-admin.
Oct 10 09:44:05 compute-1 sshd-session[71837]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 10 09:44:05 compute-1 sshd-session[71845]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 10 09:44:05 compute-1 sudo[71862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:44:05 compute-1 sudo[71862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:05 compute-1 sudo[71862]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:05 compute-1 sshd-session[71887]: Accepted publickey for ceph-admin from 192.168.122.100 port 53012 ssh2: RSA SHA256:iFwOnwcB2x2Q1gpAWZobZa2jCZZy75CuUHv4ViVnHA0
Oct 10 09:44:05 compute-1 systemd-logind[789]: New session 24 of user ceph-admin.
Oct 10 09:44:05 compute-1 systemd[1]: Started Session 24 of User ceph-admin.
Oct 10 09:44:05 compute-1 sshd-session[71887]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 10 09:44:05 compute-1 sudo[71891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-1
Oct 10 09:44:05 compute-1 sudo[71891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:05 compute-1 sudo[71891]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:06 compute-1 sshd-session[71916]: Accepted publickey for ceph-admin from 192.168.122.100 port 53018 ssh2: RSA SHA256:iFwOnwcB2x2Q1gpAWZobZa2jCZZy75CuUHv4ViVnHA0
Oct 10 09:44:06 compute-1 systemd-logind[789]: New session 25 of user ceph-admin.
Oct 10 09:44:06 compute-1 systemd[1]: Started Session 25 of User ceph-admin.
Oct 10 09:44:06 compute-1 sshd-session[71916]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 10 09:44:06 compute-1 sudo[71920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Oct 10 09:44:06 compute-1 sudo[71920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:06 compute-1 sudo[71920]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:06 compute-1 sshd-session[71945]: Accepted publickey for ceph-admin from 192.168.122.100 port 53030 ssh2: RSA SHA256:iFwOnwcB2x2Q1gpAWZobZa2jCZZy75CuUHv4ViVnHA0
Oct 10 09:44:06 compute-1 systemd-logind[789]: New session 26 of user ceph-admin.
Oct 10 09:44:06 compute-1 systemd[1]: Started Session 26 of User ceph-admin.
Oct 10 09:44:06 compute-1 sshd-session[71945]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 10 09:44:06 compute-1 sudo[71949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:44:06 compute-1 sudo[71949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:06 compute-1 sudo[71949]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:06 compute-1 sshd-session[71974]: Accepted publickey for ceph-admin from 192.168.122.100 port 53046 ssh2: RSA SHA256:iFwOnwcB2x2Q1gpAWZobZa2jCZZy75CuUHv4ViVnHA0
Oct 10 09:44:06 compute-1 systemd-logind[789]: New session 27 of user ceph-admin.
Oct 10 09:44:06 compute-1 systemd[1]: Started Session 27 of User ceph-admin.
Oct 10 09:44:06 compute-1 sshd-session[71974]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 10 09:44:07 compute-1 sudo[71978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:44:07 compute-1 sudo[71978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:07 compute-1 sudo[71978]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:07 compute-1 sshd-session[72003]: Accepted publickey for ceph-admin from 192.168.122.100 port 53050 ssh2: RSA SHA256:iFwOnwcB2x2Q1gpAWZobZa2jCZZy75CuUHv4ViVnHA0
Oct 10 09:44:07 compute-1 systemd-logind[789]: New session 28 of user ceph-admin.
Oct 10 09:44:07 compute-1 systemd[1]: Started Session 28 of User ceph-admin.
Oct 10 09:44:07 compute-1 sshd-session[72003]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 10 09:44:07 compute-1 sudo[72007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Oct 10 09:44:07 compute-1 sudo[72007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:07 compute-1 sudo[72007]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:07 compute-1 sshd-session[72032]: Accepted publickey for ceph-admin from 192.168.122.100 port 53056 ssh2: RSA SHA256:iFwOnwcB2x2Q1gpAWZobZa2jCZZy75CuUHv4ViVnHA0
Oct 10 09:44:07 compute-1 systemd-logind[789]: New session 29 of user ceph-admin.
Oct 10 09:44:07 compute-1 systemd[1]: Started Session 29 of User ceph-admin.
Oct 10 09:44:07 compute-1 sshd-session[72032]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 10 09:44:07 compute-1 sudo[72036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:44:07 compute-1 sudo[72036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:07 compute-1 sudo[72036]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:08 compute-1 sshd-session[72061]: Accepted publickey for ceph-admin from 192.168.122.100 port 53058 ssh2: RSA SHA256:iFwOnwcB2x2Q1gpAWZobZa2jCZZy75CuUHv4ViVnHA0
Oct 10 09:44:08 compute-1 systemd-logind[789]: New session 30 of user ceph-admin.
Oct 10 09:44:08 compute-1 systemd[1]: Started Session 30 of User ceph-admin.
Oct 10 09:44:08 compute-1 sshd-session[72061]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 10 09:44:08 compute-1 sudo[72065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Oct 10 09:44:08 compute-1 sudo[72065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:08 compute-1 sudo[72065]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:08 compute-1 sshd-session[72090]: Accepted publickey for ceph-admin from 192.168.122.100 port 53060 ssh2: RSA SHA256:iFwOnwcB2x2Q1gpAWZobZa2jCZZy75CuUHv4ViVnHA0
Oct 10 09:44:08 compute-1 systemd-logind[789]: New session 31 of user ceph-admin.
Oct 10 09:44:08 compute-1 systemd[1]: Started Session 31 of User ceph-admin.
Oct 10 09:44:08 compute-1 sshd-session[72090]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 10 09:44:09 compute-1 sshd-session[72117]: Accepted publickey for ceph-admin from 192.168.122.100 port 56860 ssh2: RSA SHA256:iFwOnwcB2x2Q1gpAWZobZa2jCZZy75CuUHv4ViVnHA0
Oct 10 09:44:09 compute-1 systemd-logind[789]: New session 32 of user ceph-admin.
Oct 10 09:44:09 compute-1 systemd[1]: Started Session 32 of User ceph-admin.
Oct 10 09:44:09 compute-1 sshd-session[72117]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 10 09:44:09 compute-1 sudo[72121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Oct 10 09:44:09 compute-1 sudo[72121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:09 compute-1 sudo[72121]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:10 compute-1 sshd-session[72146]: Accepted publickey for ceph-admin from 192.168.122.100 port 56870 ssh2: RSA SHA256:iFwOnwcB2x2Q1gpAWZobZa2jCZZy75CuUHv4ViVnHA0
Oct 10 09:44:10 compute-1 systemd-logind[789]: New session 33 of user ceph-admin.
Oct 10 09:44:10 compute-1 systemd[1]: Started Session 33 of User ceph-admin.
Oct 10 09:44:10 compute-1 sshd-session[72146]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 10 09:44:10 compute-1 sudo[72150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-1
Oct 10 09:44:10 compute-1 sudo[72150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:10 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 09:44:10 compute-1 sudo[72150]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:10 compute-1 sudo[72195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:44:10 compute-1 sudo[72195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:10 compute-1 sudo[72195]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:10 compute-1 sudo[72220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Oct 10 09:44:10 compute-1 sudo[72220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:11 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 09:44:11 compute-1 sudo[72220]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:11 compute-1 sudo[72265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:44:11 compute-1 sudo[72265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:11 compute-1 sudo[72265]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:11 compute-1 sudo[72290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 10 09:44:11 compute-1 sudo[72290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:11 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 09:44:11 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 09:44:11 compute-1 sudo[72290]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:11 compute-1 sudo[72351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:44:11 compute-1 sudo[72351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:11 compute-1 sudo[72351]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:12 compute-1 sudo[72376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 09:44:12 compute-1 sudo[72376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:12 compute-1 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 72413 (sysctl)
Oct 10 09:44:12 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 09:44:12 compute-1 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Oct 10 09:44:12 compute-1 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Oct 10 09:44:13 compute-1 sudo[72376]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:13 compute-1 sudo[72435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:44:13 compute-1 sudo[72435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:13 compute-1 sudo[72435]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:13 compute-1 sudo[72460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Oct 10 09:44:13 compute-1 sudo[72460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:13 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 09:44:13 compute-1 sudo[72460]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:13 compute-1 sudo[72504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:44:13 compute-1 sudo[72504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:13 compute-1 sudo[72504]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:13 compute-1 sudo[72529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- inventory --format=json-pretty --filter-for-batch
Oct 10 09:44:13 compute-1 sudo[72529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:14 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 09:44:14 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 09:44:16 compute-1 systemd[1]: var-lib-containers-storage-overlay-compat1005408732-lower\x2dmapped.mount: Deactivated successfully.
Oct 10 09:44:30 compute-1 podman[72592]: 2025-10-10 09:44:30.719911895 +0000 UTC m=+16.533105867 container create ab6e42155a4eaa161c43b2a6ed8c878065cae377938b299623ca12cbecb47b9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct 10 09:44:30 compute-1 systemd[1]: Created slice Virtual Machine and Container Slice.
Oct 10 09:44:30 compute-1 systemd[1]: Started libpod-conmon-ab6e42155a4eaa161c43b2a6ed8c878065cae377938b299623ca12cbecb47b9c.scope.
Oct 10 09:44:30 compute-1 podman[72592]: 2025-10-10 09:44:30.706543885 +0000 UTC m=+16.519737887 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:44:30 compute-1 systemd[1]: Started libcrun container.
Oct 10 09:44:30 compute-1 podman[72592]: 2025-10-10 09:44:30.835985177 +0000 UTC m=+16.649179229 container init ab6e42155a4eaa161c43b2a6ed8c878065cae377938b299623ca12cbecb47b9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_lalande, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 10 09:44:30 compute-1 podman[72592]: 2025-10-10 09:44:30.848291156 +0000 UTC m=+16.661485168 container start ab6e42155a4eaa161c43b2a6ed8c878065cae377938b299623ca12cbecb47b9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_lalande, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:44:30 compute-1 podman[72592]: 2025-10-10 09:44:30.853205559 +0000 UTC m=+16.666399571 container attach ab6e42155a4eaa161c43b2a6ed8c878065cae377938b299623ca12cbecb47b9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_lalande, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 10 09:44:30 compute-1 hopeful_lalande[72653]: 167 167
Oct 10 09:44:30 compute-1 systemd[1]: libpod-ab6e42155a4eaa161c43b2a6ed8c878065cae377938b299623ca12cbecb47b9c.scope: Deactivated successfully.
Oct 10 09:44:30 compute-1 conmon[72653]: conmon ab6e42155a4eaa161c43 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ab6e42155a4eaa161c43b2a6ed8c878065cae377938b299623ca12cbecb47b9c.scope/container/memory.events
Oct 10 09:44:30 compute-1 podman[72592]: 2025-10-10 09:44:30.859701569 +0000 UTC m=+16.672895581 container died ab6e42155a4eaa161c43b2a6ed8c878065cae377938b299623ca12cbecb47b9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_lalande, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:44:30 compute-1 systemd[1]: var-lib-containers-storage-overlay-d9a28920cae51ace91533454b6d32f787a3d7cac527fa8157e3e696f4a292469-merged.mount: Deactivated successfully.
Oct 10 09:44:30 compute-1 podman[72592]: 2025-10-10 09:44:30.909294043 +0000 UTC m=+16.722488045 container remove ab6e42155a4eaa161c43b2a6ed8c878065cae377938b299623ca12cbecb47b9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_lalande, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:44:30 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 09:44:30 compute-1 systemd[1]: libpod-conmon-ab6e42155a4eaa161c43b2a6ed8c878065cae377938b299623ca12cbecb47b9c.scope: Deactivated successfully.
Oct 10 09:44:31 compute-1 podman[72678]: 2025-10-10 09:44:31.109658103 +0000 UTC m=+0.053168651 container create 25d9550a43a1e6af99eb6ffb860cf3df1f14886dfdace273cc78416f56380d63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_jennings, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 10 09:44:31 compute-1 systemd[1]: Started libpod-conmon-25d9550a43a1e6af99eb6ffb860cf3df1f14886dfdace273cc78416f56380d63.scope.
Oct 10 09:44:31 compute-1 systemd[1]: Started libcrun container.
Oct 10 09:44:31 compute-1 podman[72678]: 2025-10-10 09:44:31.084384386 +0000 UTC m=+0.027894924 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:44:31 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6659e4b428a4235e9665f29653d184acd03aeaeb86e84157aeb3a99918ae597f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:31 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6659e4b428a4235e9665f29653d184acd03aeaeb86e84157aeb3a99918ae597f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:31 compute-1 podman[72678]: 2025-10-10 09:44:31.195720701 +0000 UTC m=+0.139231259 container init 25d9550a43a1e6af99eb6ffb860cf3df1f14886dfdace273cc78416f56380d63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:44:31 compute-1 podman[72678]: 2025-10-10 09:44:31.208306697 +0000 UTC m=+0.151817245 container start 25d9550a43a1e6af99eb6ffb860cf3df1f14886dfdace273cc78416f56380d63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:44:31 compute-1 podman[72678]: 2025-10-10 09:44:31.212552821 +0000 UTC m=+0.156063399 container attach 25d9550a43a1e6af99eb6ffb860cf3df1f14886dfdace273cc78416f56380d63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:44:32 compute-1 pensive_jennings[72694]: [
Oct 10 09:44:32 compute-1 pensive_jennings[72694]:     {
Oct 10 09:44:32 compute-1 pensive_jennings[72694]:         "available": false,
Oct 10 09:44:32 compute-1 pensive_jennings[72694]:         "being_replaced": false,
Oct 10 09:44:32 compute-1 pensive_jennings[72694]:         "ceph_device_lvm": false,
Oct 10 09:44:32 compute-1 pensive_jennings[72694]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 10 09:44:32 compute-1 pensive_jennings[72694]:         "lsm_data": {},
Oct 10 09:44:32 compute-1 pensive_jennings[72694]:         "lvs": [],
Oct 10 09:44:32 compute-1 pensive_jennings[72694]:         "path": "/dev/sr0",
Oct 10 09:44:32 compute-1 pensive_jennings[72694]:         "rejected_reasons": [
Oct 10 09:44:32 compute-1 pensive_jennings[72694]:             "Insufficient space (<5GB)",
Oct 10 09:44:32 compute-1 pensive_jennings[72694]:             "Has a FileSystem"
Oct 10 09:44:32 compute-1 pensive_jennings[72694]:         ],
Oct 10 09:44:32 compute-1 pensive_jennings[72694]:         "sys_api": {
Oct 10 09:44:32 compute-1 pensive_jennings[72694]:             "actuators": null,
Oct 10 09:44:32 compute-1 pensive_jennings[72694]:             "device_nodes": [
Oct 10 09:44:32 compute-1 pensive_jennings[72694]:                 "sr0"
Oct 10 09:44:32 compute-1 pensive_jennings[72694]:             ],
Oct 10 09:44:32 compute-1 pensive_jennings[72694]:             "devname": "sr0",
Oct 10 09:44:32 compute-1 pensive_jennings[72694]:             "human_readable_size": "482.00 KB",
Oct 10 09:44:32 compute-1 pensive_jennings[72694]:             "id_bus": "ata",
Oct 10 09:44:32 compute-1 pensive_jennings[72694]:             "model": "QEMU DVD-ROM",
Oct 10 09:44:32 compute-1 pensive_jennings[72694]:             "nr_requests": "2",
Oct 10 09:44:32 compute-1 pensive_jennings[72694]:             "parent": "/dev/sr0",
Oct 10 09:44:32 compute-1 pensive_jennings[72694]:             "partitions": {},
Oct 10 09:44:32 compute-1 pensive_jennings[72694]:             "path": "/dev/sr0",
Oct 10 09:44:32 compute-1 pensive_jennings[72694]:             "removable": "1",
Oct 10 09:44:32 compute-1 pensive_jennings[72694]:             "rev": "2.5+",
Oct 10 09:44:32 compute-1 pensive_jennings[72694]:             "ro": "0",
Oct 10 09:44:32 compute-1 pensive_jennings[72694]:             "rotational": "0",
Oct 10 09:44:32 compute-1 pensive_jennings[72694]:             "sas_address": "",
Oct 10 09:44:32 compute-1 pensive_jennings[72694]:             "sas_device_handle": "",
Oct 10 09:44:32 compute-1 pensive_jennings[72694]:             "scheduler_mode": "mq-deadline",
Oct 10 09:44:32 compute-1 pensive_jennings[72694]:             "sectors": 0,
Oct 10 09:44:32 compute-1 pensive_jennings[72694]:             "sectorsize": "2048",
Oct 10 09:44:32 compute-1 pensive_jennings[72694]:             "size": 493568.0,
Oct 10 09:44:32 compute-1 pensive_jennings[72694]:             "support_discard": "2048",
Oct 10 09:44:32 compute-1 pensive_jennings[72694]:             "type": "disk",
Oct 10 09:44:32 compute-1 pensive_jennings[72694]:             "vendor": "QEMU"
Oct 10 09:44:32 compute-1 pensive_jennings[72694]:         }
Oct 10 09:44:32 compute-1 pensive_jennings[72694]:     }
Oct 10 09:44:32 compute-1 pensive_jennings[72694]: ]
Oct 10 09:44:32 compute-1 systemd[1]: libpod-25d9550a43a1e6af99eb6ffb860cf3df1f14886dfdace273cc78416f56380d63.scope: Deactivated successfully.
Oct 10 09:44:32 compute-1 systemd[1]: libpod-25d9550a43a1e6af99eb6ffb860cf3df1f14886dfdace273cc78416f56380d63.scope: Consumed 1.009s CPU time.
Oct 10 09:44:32 compute-1 podman[73651]: 2025-10-10 09:44:32.239844167 +0000 UTC m=+0.032397384 container died 25d9550a43a1e6af99eb6ffb860cf3df1f14886dfdace273cc78416f56380d63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_jennings, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:44:32 compute-1 systemd[1]: var-lib-containers-storage-overlay-6659e4b428a4235e9665f29653d184acd03aeaeb86e84157aeb3a99918ae597f-merged.mount: Deactivated successfully.
Oct 10 09:44:32 compute-1 podman[73651]: 2025-10-10 09:44:32.292373458 +0000 UTC m=+0.084926645 container remove 25d9550a43a1e6af99eb6ffb860cf3df1f14886dfdace273cc78416f56380d63 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_jennings, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 10 09:44:32 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 09:44:32 compute-1 systemd[1]: libpod-conmon-25d9550a43a1e6af99eb6ffb860cf3df1f14886dfdace273cc78416f56380d63.scope: Deactivated successfully.
Oct 10 09:44:32 compute-1 sudo[72529]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:32 compute-1 sudo[73667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 10 09:44:32 compute-1 sudo[73667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:32 compute-1 sudo[73667]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:32 compute-1 sudo[73692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph
Oct 10 09:44:32 compute-1 sudo[73692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:32 compute-1 sudo[73692]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:32 compute-1 sudo[73717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new
Oct 10 09:44:32 compute-1 sudo[73717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:32 compute-1 sudo[73717]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:32 compute-1 sudo[73742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:44:32 compute-1 sudo[73742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:32 compute-1 sudo[73742]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:32 compute-1 sudo[73767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new
Oct 10 09:44:32 compute-1 sudo[73767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:32 compute-1 sudo[73767]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:32 compute-1 sudo[73815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new
Oct 10 09:44:32 compute-1 sudo[73815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:32 compute-1 sudo[73815]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:33 compute-1 sudo[73840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new
Oct 10 09:44:33 compute-1 sudo[73840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:33 compute-1 sudo[73840]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:33 compute-1 sudo[73865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Oct 10 09:44:33 compute-1 sudo[73865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:33 compute-1 sudo[73865]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:33 compute-1 sudo[73890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config
Oct 10 09:44:33 compute-1 sudo[73890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:33 compute-1 sudo[73890]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:33 compute-1 sudo[73915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config
Oct 10 09:44:33 compute-1 sudo[73915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:33 compute-1 sudo[73915]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:33 compute-1 sudo[73940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new
Oct 10 09:44:33 compute-1 sudo[73940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:33 compute-1 sudo[73940]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:33 compute-1 sudo[73965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:44:33 compute-1 sudo[73965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:33 compute-1 sudo[73965]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:33 compute-1 sudo[73990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new
Oct 10 09:44:33 compute-1 sudo[73990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:33 compute-1 sudo[73990]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:33 compute-1 sudo[74038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new
Oct 10 09:44:33 compute-1 sudo[74038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:33 compute-1 sudo[74038]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:33 compute-1 sudo[74063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new
Oct 10 09:44:33 compute-1 sudo[74063]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:33 compute-1 sudo[74063]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:33 compute-1 sudo[74088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:44:33 compute-1 sudo[74088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:33 compute-1 sudo[74088]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:33 compute-1 sudo[74118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 10 09:44:33 compute-1 sudo[74118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:33 compute-1 sudo[74118]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:34 compute-1 sudo[74143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph
Oct 10 09:44:34 compute-1 sudo[74143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:34 compute-1 sudo[74143]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:34 compute-1 sudo[74168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.client.admin.keyring.new
Oct 10 09:44:34 compute-1 sudo[74168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:34 compute-1 sudo[74168]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:34 compute-1 sudo[74193]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:44:34 compute-1 sudo[74193]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:34 compute-1 sudo[74193]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:34 compute-1 sudo[74218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.client.admin.keyring.new
Oct 10 09:44:34 compute-1 sudo[74218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:34 compute-1 sudo[74218]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:34 compute-1 sudo[74269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.client.admin.keyring.new
Oct 10 09:44:34 compute-1 sudo[74269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:34 compute-1 sudo[74269]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:34 compute-1 sudo[74294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.client.admin.keyring.new
Oct 10 09:44:34 compute-1 sudo[74294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:34 compute-1 sudo[74294]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:34 compute-1 sudo[74319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Oct 10 09:44:34 compute-1 sudo[74319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:34 compute-1 sudo[74319]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:34 compute-1 sudo[74344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config
Oct 10 09:44:34 compute-1 sudo[74344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:34 compute-1 sudo[74344]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:34 compute-1 sudo[74369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config
Oct 10 09:44:34 compute-1 sudo[74369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:34 compute-1 sudo[74369]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:34 compute-1 sudo[74395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring.new
Oct 10 09:44:34 compute-1 sudo[74395]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:34 compute-1 sudo[74395]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:34 compute-1 sudo[74420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:44:34 compute-1 sudo[74420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:34 compute-1 sudo[74420]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:34 compute-1 sudo[74445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring.new
Oct 10 09:44:34 compute-1 sudo[74445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:34 compute-1 sudo[74445]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:35 compute-1 sudo[74493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring.new
Oct 10 09:44:35 compute-1 sudo[74493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:35 compute-1 sudo[74493]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:35 compute-1 sudo[74518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring.new
Oct 10 09:44:35 compute-1 sudo[74518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:35 compute-1 sudo[74518]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:35 compute-1 sudo[74543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring.new /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:44:35 compute-1 sudo[74543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:35 compute-1 sudo[74543]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:35 compute-1 sudo[74568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:44:35 compute-1 sudo[74568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:35 compute-1 sudo[74568]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:35 compute-1 sudo[74593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:44:35 compute-1 sudo[74593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:35 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 09:44:35 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 09:44:36 compute-1 podman[74659]: 2025-10-10 09:44:36.020065258 +0000 UTC m=+0.057219529 container create 8508a4de2cb156fe17de9f125e1ad7a481ec4477ce115ebac7a8c2984bb3f856 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_tesla, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct 10 09:44:36 compute-1 systemd[1]: Started libpod-conmon-8508a4de2cb156fe17de9f125e1ad7a481ec4477ce115ebac7a8c2984bb3f856.scope.
Oct 10 09:44:36 compute-1 podman[74659]: 2025-10-10 09:44:35.992143895 +0000 UTC m=+0.029298246 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:44:36 compute-1 systemd[1]: Started libcrun container.
Oct 10 09:44:36 compute-1 podman[74659]: 2025-10-10 09:44:36.124044778 +0000 UTC m=+0.161199069 container init 8508a4de2cb156fe17de9f125e1ad7a481ec4477ce115ebac7a8c2984bb3f856 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_tesla, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:44:36 compute-1 podman[74659]: 2025-10-10 09:44:36.135006218 +0000 UTC m=+0.172160499 container start 8508a4de2cb156fe17de9f125e1ad7a481ec4477ce115ebac7a8c2984bb3f856 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:44:36 compute-1 podman[74659]: 2025-10-10 09:44:36.139645152 +0000 UTC m=+0.176799443 container attach 8508a4de2cb156fe17de9f125e1ad7a481ec4477ce115ebac7a8c2984bb3f856 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct 10 09:44:36 compute-1 gallant_tesla[74676]: 167 167
Oct 10 09:44:36 compute-1 systemd[1]: libpod-8508a4de2cb156fe17de9f125e1ad7a481ec4477ce115ebac7a8c2984bb3f856.scope: Deactivated successfully.
Oct 10 09:44:36 compute-1 podman[74659]: 2025-10-10 09:44:36.141370003 +0000 UTC m=+0.178524264 container died 8508a4de2cb156fe17de9f125e1ad7a481ec4477ce115ebac7a8c2984bb3f856 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_tesla, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Oct 10 09:44:36 compute-1 podman[74659]: 2025-10-10 09:44:36.180923066 +0000 UTC m=+0.218077317 container remove 8508a4de2cb156fe17de9f125e1ad7a481ec4477ce115ebac7a8c2984bb3f856 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_tesla, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:44:36 compute-1 systemd[1]: libpod-conmon-8508a4de2cb156fe17de9f125e1ad7a481ec4477ce115ebac7a8c2984bb3f856.scope: Deactivated successfully.
Oct 10 09:44:36 compute-1 systemd[1]: Reloading.
Oct 10 09:44:36 compute-1 systemd-rc-local-generator[74711]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:44:36 compute-1 systemd-sysv-generator[74718]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:44:36 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 09:44:36 compute-1 systemd[1]: Reloading.
Oct 10 09:44:36 compute-1 systemd-rc-local-generator[74757]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:44:36 compute-1 systemd-sysv-generator[74761]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:44:36 compute-1 systemd[1]: Reached target All Ceph clusters and services.
Oct 10 09:44:36 compute-1 systemd[1]: Reloading.
Oct 10 09:44:36 compute-1 systemd-sysv-generator[74800]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:44:36 compute-1 systemd-rc-local-generator[74796]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:44:36 compute-1 systemd[1]: Reached target Ceph cluster 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 09:44:37 compute-1 systemd[1]: Reloading.
Oct 10 09:44:37 compute-1 systemd-rc-local-generator[74836]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:44:37 compute-1 systemd-sysv-generator[74841]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:44:37 compute-1 systemd[1]: Reloading.
Oct 10 09:44:37 compute-1 systemd-sysv-generator[74879]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:44:37 compute-1 systemd-rc-local-generator[74876]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:44:37 compute-1 systemd[1]: Created slice Slice /system/ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 09:44:37 compute-1 systemd[1]: Reached target System Time Set.
Oct 10 09:44:37 compute-1 systemd[1]: Reached target System Time Synchronized.
Oct 10 09:44:37 compute-1 systemd[1]: Starting Ceph crash.compute-1 for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 09:44:37 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 09:44:37 compute-1 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 10 09:44:37 compute-1 podman[74933]: 2025-10-10 09:44:37.829909379 +0000 UTC m=+0.025953697 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:44:37 compute-1 podman[74933]: 2025-10-10 09:44:37.987630385 +0000 UTC m=+0.183674613 container create 8a2c16c69263d410aefee28722d7403147210c67386f896d09115878d61e6ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct 10 09:44:38 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d4484fa1eac69ede5320a83e24cd9bbe032921d8ebaa48af85faebda6c40151/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:38 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d4484fa1eac69ede5320a83e24cd9bbe032921d8ebaa48af85faebda6c40151/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:38 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d4484fa1eac69ede5320a83e24cd9bbe032921d8ebaa48af85faebda6c40151/merged/etc/ceph/ceph.client.crash.compute-1.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:38 compute-1 podman[74933]: 2025-10-10 09:44:38.085831417 +0000 UTC m=+0.281875725 container init 8a2c16c69263d410aefee28722d7403147210c67386f896d09115878d61e6ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 10 09:44:38 compute-1 podman[74933]: 2025-10-10 09:44:38.09553124 +0000 UTC m=+0.291575518 container start 8a2c16c69263d410aefee28722d7403147210c67386f896d09115878d61e6ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 10 09:44:38 compute-1 bash[74933]: 8a2c16c69263d410aefee28722d7403147210c67386f896d09115878d61e6ab8
Oct 10 09:44:38 compute-1 systemd[1]: Started Ceph crash.compute-1 for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 09:44:38 compute-1 sudo[74593]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1[74949]: INFO:ceph-crash:pinging cluster to exercise our key
Oct 10 09:44:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1[74949]: 2025-10-10T09:44:38.302+0000 7fa8e7fff640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct 10 09:44:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1[74949]: 2025-10-10T09:44:38.302+0000 7fa8e7fff640 -1 AuthRegistry(0x7fa8e8069490) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct 10 09:44:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1[74949]: 2025-10-10T09:44:38.304+0000 7fa8e7fff640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct 10 09:44:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1[74949]: 2025-10-10T09:44:38.304+0000 7fa8e7fff640 -1 AuthRegistry(0x7fa8e7ffdff0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct 10 09:44:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1[74949]: 2025-10-10T09:44:38.307+0000 7fa8e6ffd640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Oct 10 09:44:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1[74949]: 2025-10-10T09:44:38.307+0000 7fa8e7fff640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Oct 10 09:44:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1[74949]: [errno 13] RADOS permission denied (error connecting to the cluster)
Oct 10 09:44:38 compute-1 sudo[74956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:44:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1[74949]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Oct 10 09:44:38 compute-1 sudo[74956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:38 compute-1 sudo[74956]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:38 compute-1 sudo[74991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 10 09:44:38 compute-1 sudo[74991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:38 compute-1 podman[75057]: 2025-10-10 09:44:38.929896554 +0000 UTC m=+0.082947328 container create ca9c43686deef50ef109c4343f77ec053a74acb3831ec8291638827d3a3af5b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_goldberg, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Oct 10 09:44:38 compute-1 podman[75057]: 2025-10-10 09:44:38.895966065 +0000 UTC m=+0.049016919 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:44:38 compute-1 systemd[1]: Started libpod-conmon-ca9c43686deef50ef109c4343f77ec053a74acb3831ec8291638827d3a3af5b4.scope.
Oct 10 09:44:39 compute-1 systemd[1]: Started libcrun container.
Oct 10 09:44:39 compute-1 podman[75057]: 2025-10-10 09:44:39.049574082 +0000 UTC m=+0.202624916 container init ca9c43686deef50ef109c4343f77ec053a74acb3831ec8291638827d3a3af5b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_goldberg, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:44:39 compute-1 podman[75057]: 2025-10-10 09:44:39.061225741 +0000 UTC m=+0.214276525 container start ca9c43686deef50ef109c4343f77ec053a74acb3831ec8291638827d3a3af5b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:44:39 compute-1 podman[75057]: 2025-10-10 09:44:39.065897978 +0000 UTC m=+0.218948832 container attach ca9c43686deef50ef109c4343f77ec053a74acb3831ec8291638827d3a3af5b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:44:39 compute-1 systemd[1]: libpod-ca9c43686deef50ef109c4343f77ec053a74acb3831ec8291638827d3a3af5b4.scope: Deactivated successfully.
Oct 10 09:44:39 compute-1 jovial_goldberg[75074]: 167 167
Oct 10 09:44:39 compute-1 conmon[75074]: conmon ca9c43686deef50ef109 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ca9c43686deef50ef109c4343f77ec053a74acb3831ec8291638827d3a3af5b4.scope/container/memory.events
Oct 10 09:44:39 compute-1 podman[75057]: 2025-10-10 09:44:39.07217775 +0000 UTC m=+0.225228534 container died ca9c43686deef50ef109c4343f77ec053a74acb3831ec8291638827d3a3af5b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_goldberg, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:44:39 compute-1 systemd[1]: var-lib-containers-storage-overlay-a8c4df69c2c7f5c08746ab4b6f3433f4d25f39c8376e1ef6529a9136e02ffa35-merged.mount: Deactivated successfully.
Oct 10 09:44:39 compute-1 podman[75057]: 2025-10-10 09:44:39.127225985 +0000 UTC m=+0.280276769 container remove ca9c43686deef50ef109c4343f77ec053a74acb3831ec8291638827d3a3af5b4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_goldberg, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:44:39 compute-1 systemd[1]: libpod-conmon-ca9c43686deef50ef109c4343f77ec053a74acb3831ec8291638827d3a3af5b4.scope: Deactivated successfully.
Oct 10 09:44:39 compute-1 podman[75097]: 2025-10-10 09:44:39.393174755 +0000 UTC m=+0.076400787 container create 5af9913366b30cb619ce66969c4521e65afde7dc94e9493027c037262b838c13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_grothendieck, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:44:39 compute-1 systemd[1]: Started libpod-conmon-5af9913366b30cb619ce66969c4521e65afde7dc94e9493027c037262b838c13.scope.
Oct 10 09:44:39 compute-1 podman[75097]: 2025-10-10 09:44:39.356547538 +0000 UTC m=+0.039773630 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:44:39 compute-1 systemd[1]: Started libcrun container.
Oct 10 09:44:39 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30d5aa3ddc881c0fe951d31995b9031110de9354756eb070cf7ef411484cf4f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:39 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30d5aa3ddc881c0fe951d31995b9031110de9354756eb070cf7ef411484cf4f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:39 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30d5aa3ddc881c0fe951d31995b9031110de9354756eb070cf7ef411484cf4f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:39 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30d5aa3ddc881c0fe951d31995b9031110de9354756eb070cf7ef411484cf4f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:39 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30d5aa3ddc881c0fe951d31995b9031110de9354756eb070cf7ef411484cf4f9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:39 compute-1 podman[75097]: 2025-10-10 09:44:39.494305042 +0000 UTC m=+0.177531124 container init 5af9913366b30cb619ce66969c4521e65afde7dc94e9493027c037262b838c13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 10 09:44:39 compute-1 podman[75097]: 2025-10-10 09:44:39.506367663 +0000 UTC m=+0.189593685 container start 5af9913366b30cb619ce66969c4521e65afde7dc94e9493027c037262b838c13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:44:39 compute-1 podman[75097]: 2025-10-10 09:44:39.511310698 +0000 UTC m=+0.194536770 container attach 5af9913366b30cb619ce66969c4521e65afde7dc94e9493027c037262b838c13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_grothendieck, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct 10 09:44:39 compute-1 angry_grothendieck[75113]: --> passed data devices: 0 physical, 1 LVM
Oct 10 09:44:39 compute-1 angry_grothendieck[75113]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 10 09:44:40 compute-1 angry_grothendieck[75113]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 10 09:44:40 compute-1 angry_grothendieck[75113]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new aea3dcf0-efc7-4ff7-81f8-9509a806fb04
Oct 10 09:44:40 compute-1 angry_grothendieck[75113]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Oct 10 09:44:40 compute-1 angry_grothendieck[75113]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Oct 10 09:44:40 compute-1 angry_grothendieck[75113]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 10 09:44:40 compute-1 angry_grothendieck[75113]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Oct 10 09:44:40 compute-1 angry_grothendieck[75113]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Oct 10 09:44:40 compute-1 lvm[75178]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 09:44:40 compute-1 lvm[75178]: VG ceph_vg0 finished
Oct 10 09:44:41 compute-1 angry_grothendieck[75113]:  stderr: got monmap epoch 1
Oct 10 09:44:41 compute-1 angry_grothendieck[75113]: --> Creating keyring file for osd.1
Oct 10 09:44:41 compute-1 angry_grothendieck[75113]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Oct 10 09:44:41 compute-1 angry_grothendieck[75113]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Oct 10 09:44:41 compute-1 angry_grothendieck[75113]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid aea3dcf0-efc7-4ff7-81f8-9509a806fb04 --setuser ceph --setgroup ceph
Oct 10 09:44:44 compute-1 angry_grothendieck[75113]:  stderr: 2025-10-10T09:44:41.327+0000 7fd8dc9b4740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Oct 10 09:44:44 compute-1 angry_grothendieck[75113]:  stderr: 2025-10-10T09:44:41.597+0000 7fd8dc9b4740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Oct 10 09:44:44 compute-1 angry_grothendieck[75113]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Oct 10 09:44:44 compute-1 angry_grothendieck[75113]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 10 09:44:44 compute-1 angry_grothendieck[75113]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Oct 10 09:44:45 compute-1 angry_grothendieck[75113]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Oct 10 09:44:45 compute-1 angry_grothendieck[75113]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Oct 10 09:44:45 compute-1 angry_grothendieck[75113]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 10 09:44:45 compute-1 angry_grothendieck[75113]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 10 09:44:45 compute-1 angry_grothendieck[75113]: --> ceph-volume lvm activate successful for osd ID: 1
Oct 10 09:44:45 compute-1 angry_grothendieck[75113]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Oct 10 09:44:45 compute-1 systemd[1]: libpod-5af9913366b30cb619ce66969c4521e65afde7dc94e9493027c037262b838c13.scope: Deactivated successfully.
Oct 10 09:44:45 compute-1 systemd[1]: libpod-5af9913366b30cb619ce66969c4521e65afde7dc94e9493027c037262b838c13.scope: Consumed 2.493s CPU time.
Oct 10 09:44:45 compute-1 podman[76078]: 2025-10-10 09:44:45.141503188 +0000 UTC m=+0.029658681 container died 5af9913366b30cb619ce66969c4521e65afde7dc94e9493027c037262b838c13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_grothendieck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct 10 09:44:45 compute-1 systemd[1]: var-lib-containers-storage-overlay-30d5aa3ddc881c0fe951d31995b9031110de9354756eb070cf7ef411484cf4f9-merged.mount: Deactivated successfully.
Oct 10 09:44:45 compute-1 podman[76078]: 2025-10-10 09:44:45.193377023 +0000 UTC m=+0.081532506 container remove 5af9913366b30cb619ce66969c4521e65afde7dc94e9493027c037262b838c13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_grothendieck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct 10 09:44:45 compute-1 systemd[1]: libpod-conmon-5af9913366b30cb619ce66969c4521e65afde7dc94e9493027c037262b838c13.scope: Deactivated successfully.
Oct 10 09:44:45 compute-1 sudo[74991]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:45 compute-1 sudo[76093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:44:45 compute-1 sudo[76093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:45 compute-1 sudo[76093]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:45 compute-1 sudo[76118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- lvm list --format json
Oct 10 09:44:45 compute-1 sudo[76118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:45 compute-1 podman[76184]: 2025-10-10 09:44:45.880831749 +0000 UTC m=+0.066503947 container create 2253274caeab07e4d2d3ef6b29de47bdd4623ce4626732595c00321f49c56489 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_turing, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:44:45 compute-1 systemd[1]: Started libpod-conmon-2253274caeab07e4d2d3ef6b29de47bdd4623ce4626732595c00321f49c56489.scope.
Oct 10 09:44:45 compute-1 podman[76184]: 2025-10-10 09:44:45.853840818 +0000 UTC m=+0.039513096 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:44:45 compute-1 systemd[1]: Started libcrun container.
Oct 10 09:44:45 compute-1 podman[76184]: 2025-10-10 09:44:45.975814263 +0000 UTC m=+0.161486491 container init 2253274caeab07e4d2d3ef6b29de47bdd4623ce4626732595c00321f49c56489 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_turing, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:44:45 compute-1 podman[76184]: 2025-10-10 09:44:45.982443145 +0000 UTC m=+0.168115343 container start 2253274caeab07e4d2d3ef6b29de47bdd4623ce4626732595c00321f49c56489 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:44:45 compute-1 podman[76184]: 2025-10-10 09:44:45.986170661 +0000 UTC m=+0.171842879 container attach 2253274caeab07e4d2d3ef6b29de47bdd4623ce4626732595c00321f49c56489 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_turing, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 10 09:44:45 compute-1 affectionate_turing[76201]: 167 167
Oct 10 09:44:45 compute-1 systemd[1]: libpod-2253274caeab07e4d2d3ef6b29de47bdd4623ce4626732595c00321f49c56489.scope: Deactivated successfully.
Oct 10 09:44:45 compute-1 podman[76184]: 2025-10-10 09:44:45.991723905 +0000 UTC m=+0.177396123 container died 2253274caeab07e4d2d3ef6b29de47bdd4623ce4626732595c00321f49c56489 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_turing, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:44:46 compute-1 systemd[1]: var-lib-containers-storage-overlay-4ce150879014ac2a61092bd230545284892f0fd59247521b94fa09d02765a264-merged.mount: Deactivated successfully.
Oct 10 09:44:46 compute-1 podman[76184]: 2025-10-10 09:44:46.046008044 +0000 UTC m=+0.231680262 container remove 2253274caeab07e4d2d3ef6b29de47bdd4623ce4626732595c00321f49c56489 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_turing, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:44:46 compute-1 systemd[1]: libpod-conmon-2253274caeab07e4d2d3ef6b29de47bdd4623ce4626732595c00321f49c56489.scope: Deactivated successfully.
Oct 10 09:44:46 compute-1 podman[76226]: 2025-10-10 09:44:46.286960025 +0000 UTC m=+0.060478030 container create 87928d9dab747e50e8d8b32377b46dd47ac8d9c01b62b1599af06dff2b7e03b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_agnesi, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:44:46 compute-1 systemd[1]: Started libpod-conmon-87928d9dab747e50e8d8b32377b46dd47ac8d9c01b62b1599af06dff2b7e03b5.scope.
Oct 10 09:44:46 compute-1 podman[76226]: 2025-10-10 09:44:46.256425443 +0000 UTC m=+0.029943528 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:44:46 compute-1 systemd[1]: Started libcrun container.
Oct 10 09:44:46 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb0914bb070986a99aa7a70e4652a9efe925cb6d972b12acddced00b0f527500/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:46 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb0914bb070986a99aa7a70e4652a9efe925cb6d972b12acddced00b0f527500/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:46 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb0914bb070986a99aa7a70e4652a9efe925cb6d972b12acddced00b0f527500/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:46 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb0914bb070986a99aa7a70e4652a9efe925cb6d972b12acddced00b0f527500/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:46 compute-1 podman[76226]: 2025-10-10 09:44:46.377837213 +0000 UTC m=+0.151355248 container init 87928d9dab747e50e8d8b32377b46dd47ac8d9c01b62b1599af06dff2b7e03b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_agnesi, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Oct 10 09:44:46 compute-1 podman[76226]: 2025-10-10 09:44:46.384015463 +0000 UTC m=+0.157533498 container start 87928d9dab747e50e8d8b32377b46dd47ac8d9c01b62b1599af06dff2b7e03b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_agnesi, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct 10 09:44:46 compute-1 podman[76226]: 2025-10-10 09:44:46.38811732 +0000 UTC m=+0.161635315 container attach 87928d9dab747e50e8d8b32377b46dd47ac8d9c01b62b1599af06dff2b7e03b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_agnesi, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:44:46 compute-1 confident_agnesi[76244]: {
Oct 10 09:44:46 compute-1 confident_agnesi[76244]:     "1": [
Oct 10 09:44:46 compute-1 confident_agnesi[76244]:         {
Oct 10 09:44:46 compute-1 confident_agnesi[76244]:             "devices": [
Oct 10 09:44:46 compute-1 confident_agnesi[76244]:                 "/dev/loop3"
Oct 10 09:44:46 compute-1 confident_agnesi[76244]:             ],
Oct 10 09:44:46 compute-1 confident_agnesi[76244]:             "lv_name": "ceph_lv0",
Oct 10 09:44:46 compute-1 confident_agnesi[76244]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 09:44:46 compute-1 confident_agnesi[76244]:             "lv_size": "21470642176",
Oct 10 09:44:46 compute-1 confident_agnesi[76244]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=NmNLD2-CQMY-EuHT-dv5T-keSY-5aCM-1JK6n1,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=21f084a3-af34-5230-afe4-ea5cd24a55f4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=aea3dcf0-efc7-4ff7-81f8-9509a806fb04,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 10 09:44:46 compute-1 confident_agnesi[76244]:             "lv_uuid": "NmNLD2-CQMY-EuHT-dv5T-keSY-5aCM-1JK6n1",
Oct 10 09:44:46 compute-1 confident_agnesi[76244]:             "name": "ceph_lv0",
Oct 10 09:44:46 compute-1 confident_agnesi[76244]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 10 09:44:46 compute-1 confident_agnesi[76244]:             "tags": {
Oct 10 09:44:46 compute-1 confident_agnesi[76244]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 10 09:44:46 compute-1 confident_agnesi[76244]:                 "ceph.block_uuid": "NmNLD2-CQMY-EuHT-dv5T-keSY-5aCM-1JK6n1",
Oct 10 09:44:46 compute-1 confident_agnesi[76244]:                 "ceph.cephx_lockbox_secret": "",
Oct 10 09:44:46 compute-1 confident_agnesi[76244]:                 "ceph.cluster_fsid": "21f084a3-af34-5230-afe4-ea5cd24a55f4",
Oct 10 09:44:46 compute-1 confident_agnesi[76244]:                 "ceph.cluster_name": "ceph",
Oct 10 09:44:46 compute-1 confident_agnesi[76244]:                 "ceph.crush_device_class": "",
Oct 10 09:44:46 compute-1 confident_agnesi[76244]:                 "ceph.encrypted": "0",
Oct 10 09:44:46 compute-1 confident_agnesi[76244]:                 "ceph.osd_fsid": "aea3dcf0-efc7-4ff7-81f8-9509a806fb04",
Oct 10 09:44:46 compute-1 confident_agnesi[76244]:                 "ceph.osd_id": "1",
Oct 10 09:44:46 compute-1 confident_agnesi[76244]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 10 09:44:46 compute-1 confident_agnesi[76244]:                 "ceph.type": "block",
Oct 10 09:44:46 compute-1 confident_agnesi[76244]:                 "ceph.vdo": "0",
Oct 10 09:44:46 compute-1 confident_agnesi[76244]:                 "ceph.with_tpm": "0"
Oct 10 09:44:46 compute-1 confident_agnesi[76244]:             },
Oct 10 09:44:46 compute-1 confident_agnesi[76244]:             "type": "block",
Oct 10 09:44:46 compute-1 confident_agnesi[76244]:             "vg_name": "ceph_vg0"
Oct 10 09:44:46 compute-1 confident_agnesi[76244]:         }
Oct 10 09:44:46 compute-1 confident_agnesi[76244]:     ]
Oct 10 09:44:46 compute-1 confident_agnesi[76244]: }
Oct 10 09:44:46 compute-1 systemd[1]: libpod-87928d9dab747e50e8d8b32377b46dd47ac8d9c01b62b1599af06dff2b7e03b5.scope: Deactivated successfully.
Oct 10 09:44:46 compute-1 podman[76226]: 2025-10-10 09:44:46.710985616 +0000 UTC m=+0.484503621 container died 87928d9dab747e50e8d8b32377b46dd47ac8d9c01b62b1599af06dff2b7e03b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_agnesi, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 10 09:44:46 compute-1 systemd[1]: var-lib-containers-storage-overlay-bb0914bb070986a99aa7a70e4652a9efe925cb6d972b12acddced00b0f527500-merged.mount: Deactivated successfully.
Oct 10 09:44:46 compute-1 podman[76226]: 2025-10-10 09:44:46.765991363 +0000 UTC m=+0.539509398 container remove 87928d9dab747e50e8d8b32377b46dd47ac8d9c01b62b1599af06dff2b7e03b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_agnesi, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:44:46 compute-1 systemd[1]: libpod-conmon-87928d9dab747e50e8d8b32377b46dd47ac8d9c01b62b1599af06dff2b7e03b5.scope: Deactivated successfully.
Oct 10 09:44:46 compute-1 sudo[76118]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:46 compute-1 sudo[76266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:44:46 compute-1 sudo[76266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:46 compute-1 sudo[76266]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:47 compute-1 sudo[76291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:44:47 compute-1 sudo[76291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:47 compute-1 podman[76356]: 2025-10-10 09:44:47.559669444 +0000 UTC m=+0.070425688 container create 8490ede53b66f8d57e2afc8de8ac40430a25f603b3d55e0331fd2e98ed546dfb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 10 09:44:47 compute-1 systemd[1]: Started libpod-conmon-8490ede53b66f8d57e2afc8de8ac40430a25f603b3d55e0331fd2e98ed546dfb.scope.
Oct 10 09:44:47 compute-1 podman[76356]: 2025-10-10 09:44:47.527015116 +0000 UTC m=+0.037771360 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:44:47 compute-1 systemd[1]: Started libcrun container.
Oct 10 09:44:47 compute-1 podman[76356]: 2025-10-10 09:44:47.664226016 +0000 UTC m=+0.174982260 container init 8490ede53b66f8d57e2afc8de8ac40430a25f603b3d55e0331fd2e98ed546dfb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_shannon, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct 10 09:44:47 compute-1 podman[76356]: 2025-10-10 09:44:47.675023576 +0000 UTC m=+0.185779820 container start 8490ede53b66f8d57e2afc8de8ac40430a25f603b3d55e0331fd2e98ed546dfb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 10 09:44:47 compute-1 podman[76356]: 2025-10-10 09:44:47.679505873 +0000 UTC m=+0.190262117 container attach 8490ede53b66f8d57e2afc8de8ac40430a25f603b3d55e0331fd2e98ed546dfb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_shannon, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 10 09:44:47 compute-1 laughing_shannon[76372]: 167 167
Oct 10 09:44:47 compute-1 systemd[1]: libpod-8490ede53b66f8d57e2afc8de8ac40430a25f603b3d55e0331fd2e98ed546dfb.scope: Deactivated successfully.
Oct 10 09:44:47 compute-1 podman[76356]: 2025-10-10 09:44:47.685114938 +0000 UTC m=+0.195871172 container died 8490ede53b66f8d57e2afc8de8ac40430a25f603b3d55e0331fd2e98ed546dfb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:44:47 compute-1 systemd[1]: var-lib-containers-storage-overlay-59eb3b4202372853172443441f1ebba4fc5c6a544a342723d8efd997751c2d1c-merged.mount: Deactivated successfully.
Oct 10 09:44:47 compute-1 podman[76356]: 2025-10-10 09:44:47.741100571 +0000 UTC m=+0.251856815 container remove 8490ede53b66f8d57e2afc8de8ac40430a25f603b3d55e0331fd2e98ed546dfb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_shannon, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:44:47 compute-1 systemd[1]: libpod-conmon-8490ede53b66f8d57e2afc8de8ac40430a25f603b3d55e0331fd2e98ed546dfb.scope: Deactivated successfully.
Oct 10 09:44:48 compute-1 podman[76403]: 2025-10-10 09:44:48.119319543 +0000 UTC m=+0.068792046 container create 12131cb2df9d99bdc64c54a266e4990502cf9cd30e3482898c78fbcf3899bbf1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate-test, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 10 09:44:48 compute-1 systemd[1]: Started libpod-conmon-12131cb2df9d99bdc64c54a266e4990502cf9cd30e3482898c78fbcf3899bbf1.scope.
Oct 10 09:44:48 compute-1 podman[76403]: 2025-10-10 09:44:48.090182307 +0000 UTC m=+0.039654890 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:44:48 compute-1 systemd[1]: Started libcrun container.
Oct 10 09:44:48 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4250b58d3f61e4bcecbff23538f4b76ce113254658d7e6f7eb52ff009a267264/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:48 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4250b58d3f61e4bcecbff23538f4b76ce113254658d7e6f7eb52ff009a267264/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:48 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4250b58d3f61e4bcecbff23538f4b76ce113254658d7e6f7eb52ff009a267264/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:48 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4250b58d3f61e4bcecbff23538f4b76ce113254658d7e6f7eb52ff009a267264/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:48 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4250b58d3f61e4bcecbff23538f4b76ce113254658d7e6f7eb52ff009a267264/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:48 compute-1 podman[76403]: 2025-10-10 09:44:48.218086645 +0000 UTC m=+0.167559228 container init 12131cb2df9d99bdc64c54a266e4990502cf9cd30e3482898c78fbcf3899bbf1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate-test, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:44:48 compute-1 podman[76403]: 2025-10-10 09:44:48.234999184 +0000 UTC m=+0.184471697 container start 12131cb2df9d99bdc64c54a266e4990502cf9cd30e3482898c78fbcf3899bbf1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:44:48 compute-1 podman[76403]: 2025-10-10 09:44:48.239111381 +0000 UTC m=+0.188583914 container attach 12131cb2df9d99bdc64c54a266e4990502cf9cd30e3482898c78fbcf3899bbf1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate-test, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:44:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate-test[76420]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Oct 10 09:44:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate-test[76420]:                             [--no-systemd] [--no-tmpfs]
Oct 10 09:44:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate-test[76420]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct 10 09:44:48 compute-1 systemd[1]: libpod-12131cb2df9d99bdc64c54a266e4990502cf9cd30e3482898c78fbcf3899bbf1.scope: Deactivated successfully.
Oct 10 09:44:48 compute-1 podman[76403]: 2025-10-10 09:44:48.42105323 +0000 UTC m=+0.370525753 container died 12131cb2df9d99bdc64c54a266e4990502cf9cd30e3482898c78fbcf3899bbf1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate-test, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct 10 09:44:48 compute-1 systemd[1]: var-lib-containers-storage-overlay-4250b58d3f61e4bcecbff23538f4b76ce113254658d7e6f7eb52ff009a267264-merged.mount: Deactivated successfully.
Oct 10 09:44:48 compute-1 podman[76403]: 2025-10-10 09:44:48.481875039 +0000 UTC m=+0.431347562 container remove 12131cb2df9d99bdc64c54a266e4990502cf9cd30e3482898c78fbcf3899bbf1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 10 09:44:48 compute-1 systemd[1]: libpod-conmon-12131cb2df9d99bdc64c54a266e4990502cf9cd30e3482898c78fbcf3899bbf1.scope: Deactivated successfully.
Oct 10 09:44:48 compute-1 systemd[1]: Reloading.
Oct 10 09:44:48 compute-1 systemd-rc-local-generator[76481]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:44:48 compute-1 systemd-sysv-generator[76486]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:44:49 compute-1 systemd[1]: Reloading.
Oct 10 09:44:49 compute-1 systemd-sysv-generator[76527]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:44:49 compute-1 systemd-rc-local-generator[76523]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:44:49 compute-1 systemd[1]: Starting Ceph osd.1 for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 09:44:49 compute-1 podman[76581]: 2025-10-10 09:44:49.785862659 +0000 UTC m=+0.070417548 container create f51af62f30236651c8c01b389d89c2e47c790c42bb4b38e72dc0fbf016b49ac4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:44:49 compute-1 podman[76581]: 2025-10-10 09:44:49.758780386 +0000 UTC m=+0.043335335 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:44:49 compute-1 systemd[1]: Started libcrun container.
Oct 10 09:44:49 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32c79c47d811e95f9587283fdeef87856ea521fbb5836ebf3e4280e7f4d5d312/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:49 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32c79c47d811e95f9587283fdeef87856ea521fbb5836ebf3e4280e7f4d5d312/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:49 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32c79c47d811e95f9587283fdeef87856ea521fbb5836ebf3e4280e7f4d5d312/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:49 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32c79c47d811e95f9587283fdeef87856ea521fbb5836ebf3e4280e7f4d5d312/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:49 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32c79c47d811e95f9587283fdeef87856ea521fbb5836ebf3e4280e7f4d5d312/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:49 compute-1 podman[76581]: 2025-10-10 09:44:49.883142092 +0000 UTC m=+0.167696991 container init f51af62f30236651c8c01b389d89c2e47c790c42bb4b38e72dc0fbf016b49ac4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:44:49 compute-1 podman[76581]: 2025-10-10 09:44:49.892784723 +0000 UTC m=+0.177339622 container start f51af62f30236651c8c01b389d89c2e47c790c42bb4b38e72dc0fbf016b49ac4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct 10 09:44:49 compute-1 podman[76581]: 2025-10-10 09:44:49.897145505 +0000 UTC m=+0.181700374 container attach f51af62f30236651c8c01b389d89c2e47c790c42bb4b38e72dc0fbf016b49ac4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct 10 09:44:50 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate[76597]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 10 09:44:50 compute-1 bash[76581]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 10 09:44:50 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate[76597]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 10 09:44:50 compute-1 bash[76581]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 10 09:44:50 compute-1 lvm[76678]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 09:44:50 compute-1 lvm[76678]: VG ceph_vg0 finished
Oct 10 09:44:50 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate[76597]: --> Failed to activate via raw: did not find any matching OSD to activate
Oct 10 09:44:50 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate[76597]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 10 09:44:50 compute-1 bash[76581]: --> Failed to activate via raw: did not find any matching OSD to activate
Oct 10 09:44:50 compute-1 bash[76581]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 10 09:44:50 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate[76597]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 10 09:44:50 compute-1 bash[76581]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 10 09:44:50 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate[76597]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 10 09:44:50 compute-1 bash[76581]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 10 09:44:50 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate[76597]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Oct 10 09:44:50 compute-1 bash[76581]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Oct 10 09:44:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate[76597]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Oct 10 09:44:51 compute-1 bash[76581]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Oct 10 09:44:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate[76597]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Oct 10 09:44:51 compute-1 bash[76581]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Oct 10 09:44:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate[76597]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 10 09:44:51 compute-1 bash[76581]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 10 09:44:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate[76597]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 10 09:44:51 compute-1 bash[76581]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct 10 09:44:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate[76597]: --> ceph-volume lvm activate successful for osd ID: 1
Oct 10 09:44:51 compute-1 bash[76581]: --> ceph-volume lvm activate successful for osd ID: 1
Oct 10 09:44:51 compute-1 systemd[1]: libpod-f51af62f30236651c8c01b389d89c2e47c790c42bb4b38e72dc0fbf016b49ac4.scope: Deactivated successfully.
Oct 10 09:44:51 compute-1 systemd[1]: libpod-f51af62f30236651c8c01b389d89c2e47c790c42bb4b38e72dc0fbf016b49ac4.scope: Consumed 1.598s CPU time.
Oct 10 09:44:51 compute-1 podman[76581]: 2025-10-10 09:44:51.297385163 +0000 UTC m=+1.581940062 container died f51af62f30236651c8c01b389d89c2e47c790c42bb4b38e72dc0fbf016b49ac4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 10 09:44:51 compute-1 systemd[1]: var-lib-containers-storage-overlay-32c79c47d811e95f9587283fdeef87856ea521fbb5836ebf3e4280e7f4d5d312-merged.mount: Deactivated successfully.
Oct 10 09:44:51 compute-1 podman[76581]: 2025-10-10 09:44:51.364793772 +0000 UTC m=+1.649348661 container remove f51af62f30236651c8c01b389d89c2e47c790c42bb4b38e72dc0fbf016b49ac4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1-activate, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:44:51 compute-1 podman[76847]: 2025-10-10 09:44:51.713800597 +0000 UTC m=+0.073256952 container create 71f3fc600b7910e73f609b30b7e76b1a6092f3f34b2743fd26a7ca2fda7fb7a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct 10 09:44:51 compute-1 podman[76847]: 2025-10-10 09:44:51.685507853 +0000 UTC m=+0.044964258 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:44:51 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f7073224a203fb5f8da5e6995125ebf74cd238059d3fc3ba51e9e94c09a12e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:51 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f7073224a203fb5f8da5e6995125ebf74cd238059d3fc3ba51e9e94c09a12e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:51 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f7073224a203fb5f8da5e6995125ebf74cd238059d3fc3ba51e9e94c09a12e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:51 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f7073224a203fb5f8da5e6995125ebf74cd238059d3fc3ba51e9e94c09a12e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:51 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f7073224a203fb5f8da5e6995125ebf74cd238059d3fc3ba51e9e94c09a12e7/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:51 compute-1 podman[76847]: 2025-10-10 09:44:51.810274709 +0000 UTC m=+0.169731124 container init 71f3fc600b7910e73f609b30b7e76b1a6092f3f34b2743fd26a7ca2fda7fb7a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct 10 09:44:51 compute-1 podman[76847]: 2025-10-10 09:44:51.82572835 +0000 UTC m=+0.185184705 container start 71f3fc600b7910e73f609b30b7e76b1a6092f3f34b2743fd26a7ca2fda7fb7a5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:44:51 compute-1 bash[76847]: 71f3fc600b7910e73f609b30b7e76b1a6092f3f34b2743fd26a7ca2fda7fb7a5
Oct 10 09:44:51 compute-1 systemd[1]: Started Ceph osd.1 for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 09:44:51 compute-1 ceph-osd[76867]: set uid:gid to 167:167 (ceph:ceph)
Oct 10 09:44:51 compute-1 ceph-osd[76867]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-osd, pid 2
Oct 10 09:44:51 compute-1 ceph-osd[76867]: pidfile_write: ignore empty --pid-file
Oct 10 09:44:51 compute-1 ceph-osd[76867]: bdev(0x55b094a31800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 10 09:44:51 compute-1 ceph-osd[76867]: bdev(0x55b094a31800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 10 09:44:51 compute-1 ceph-osd[76867]: bdev(0x55b094a31800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 09:44:51 compute-1 ceph-osd[76867]: bdev(0x55b094a31800 /var/lib/ceph/osd/ceph-1/block) close
Oct 10 09:44:51 compute-1 sudo[76291]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:52 compute-1 sudo[76879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:44:52 compute-1 sudo[76879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:52 compute-1 sudo[76879]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:52 compute-1 ceph-osd[76867]: bdev(0x55b094a31800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 10 09:44:52 compute-1 ceph-osd[76867]: bdev(0x55b094a31800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 10 09:44:52 compute-1 ceph-osd[76867]: bdev(0x55b094a31800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 09:44:52 compute-1 ceph-osd[76867]: bdev(0x55b094a31800 /var/lib/ceph/osd/ceph-1/block) close
Oct 10 09:44:52 compute-1 sudo[76904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- raw list --format json
Oct 10 09:44:52 compute-1 sudo[76904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:52 compute-1 ceph-osd[76867]: bdev(0x55b094a31800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 10 09:44:52 compute-1 ceph-osd[76867]: bdev(0x55b094a31800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 10 09:44:52 compute-1 ceph-osd[76867]: bdev(0x55b094a31800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 09:44:52 compute-1 ceph-osd[76867]: bdev(0x55b094a31800 /var/lib/ceph/osd/ceph-1/block) close
Oct 10 09:44:52 compute-1 ceph-osd[76867]: bdev(0x55b094a31800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 10 09:44:52 compute-1 ceph-osd[76867]: bdev(0x55b094a31800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 10 09:44:52 compute-1 ceph-osd[76867]: bdev(0x55b094a31800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 09:44:52 compute-1 ceph-osd[76867]: bdev(0x55b094a31800 /var/lib/ceph/osd/ceph-1/block) close
Oct 10 09:44:52 compute-1 podman[76975]: 2025-10-10 09:44:52.674171962 +0000 UTC m=+0.064945165 container create f1baeb91ebe3812ae6785fda11811e7f95812537c65aa1defc36107ccaf769a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_mahavira, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2)
Oct 10 09:44:52 compute-1 systemd[1]: Started libpod-conmon-f1baeb91ebe3812ae6785fda11811e7f95812537c65aa1defc36107ccaf769a6.scope.
Oct 10 09:44:52 compute-1 ceph-osd[76867]: bdev(0x55b094a31800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 10 09:44:52 compute-1 ceph-osd[76867]: bdev(0x55b094a31800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 10 09:44:52 compute-1 podman[76975]: 2025-10-10 09:44:52.648862426 +0000 UTC m=+0.039635649 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:44:52 compute-1 ceph-osd[76867]: bdev(0x55b094a31800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 09:44:52 compute-1 ceph-osd[76867]: bdev(0x55b094a31800 /var/lib/ceph/osd/ceph-1/block) close
Oct 10 09:44:52 compute-1 systemd[1]: Started libcrun container.
Oct 10 09:44:52 compute-1 podman[76975]: 2025-10-10 09:44:52.788050726 +0000 UTC m=+0.178823919 container init f1baeb91ebe3812ae6785fda11811e7f95812537c65aa1defc36107ccaf769a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:44:52 compute-1 podman[76975]: 2025-10-10 09:44:52.799908815 +0000 UTC m=+0.190681988 container start f1baeb91ebe3812ae6785fda11811e7f95812537c65aa1defc36107ccaf769a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_mahavira, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:44:52 compute-1 podman[76975]: 2025-10-10 09:44:52.803564259 +0000 UTC m=+0.194337432 container attach f1baeb91ebe3812ae6785fda11811e7f95812537c65aa1defc36107ccaf769a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_mahavira, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:44:52 compute-1 jolly_mahavira[76991]: 167 167
Oct 10 09:44:52 compute-1 systemd[1]: libpod-f1baeb91ebe3812ae6785fda11811e7f95812537c65aa1defc36107ccaf769a6.scope: Deactivated successfully.
Oct 10 09:44:52 compute-1 podman[76975]: 2025-10-10 09:44:52.809968675 +0000 UTC m=+0.200741888 container died f1baeb91ebe3812ae6785fda11811e7f95812537c65aa1defc36107ccaf769a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_mahavira, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:44:52 compute-1 systemd[1]: var-lib-containers-storage-overlay-4eb8e020131bd03da8b29b3cff42b851106bc0d9136f120f4decea947d0f2efb-merged.mount: Deactivated successfully.
Oct 10 09:44:52 compute-1 podman[76975]: 2025-10-10 09:44:52.856757709 +0000 UTC m=+0.247530912 container remove f1baeb91ebe3812ae6785fda11811e7f95812537c65aa1defc36107ccaf769a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct 10 09:44:52 compute-1 systemd[1]: libpod-conmon-f1baeb91ebe3812ae6785fda11811e7f95812537c65aa1defc36107ccaf769a6.scope: Deactivated successfully.
Oct 10 09:44:53 compute-1 ceph-osd[76867]: bdev(0x55b094a31800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 10 09:44:53 compute-1 ceph-osd[76867]: bdev(0x55b094a31800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 10 09:44:53 compute-1 ceph-osd[76867]: bdev(0x55b094a31800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 09:44:53 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 10 09:44:53 compute-1 ceph-osd[76867]: bdev(0x55b094a31c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 10 09:44:53 compute-1 ceph-osd[76867]: bdev(0x55b094a31c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 10 09:44:53 compute-1 ceph-osd[76867]: bdev(0x55b094a31c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 09:44:53 compute-1 ceph-osd[76867]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct 10 09:44:53 compute-1 ceph-osd[76867]: bdev(0x55b094a31c00 /var/lib/ceph/osd/ceph-1/block) close
Oct 10 09:44:53 compute-1 podman[77015]: 2025-10-10 09:44:53.037373784 +0000 UTC m=+0.051312722 container create 17e83a9e0fd48574b34452c4a2e2ae80c94f96242e16efb80f9f2c664782fc52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_davinci, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 10 09:44:53 compute-1 systemd[1]: Started libpod-conmon-17e83a9e0fd48574b34452c4a2e2ae80c94f96242e16efb80f9f2c664782fc52.scope.
Oct 10 09:44:53 compute-1 podman[77015]: 2025-10-10 09:44:53.013538077 +0000 UTC m=+0.027477045 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:44:53 compute-1 systemd[1]: Started libcrun container.
Oct 10 09:44:53 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d65ebe0d69a1c556116f7717c959d347e1220ebfab71deb34d40c95b7f5b28d5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:53 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d65ebe0d69a1c556116f7717c959d347e1220ebfab71deb34d40c95b7f5b28d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:53 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d65ebe0d69a1c556116f7717c959d347e1220ebfab71deb34d40c95b7f5b28d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:53 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d65ebe0d69a1c556116f7717c959d347e1220ebfab71deb34d40c95b7f5b28d5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:53 compute-1 podman[77015]: 2025-10-10 09:44:53.148227271 +0000 UTC m=+0.162166289 container init 17e83a9e0fd48574b34452c4a2e2ae80c94f96242e16efb80f9f2c664782fc52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:44:53 compute-1 podman[77015]: 2025-10-10 09:44:53.172340707 +0000 UTC m=+0.186279645 container start 17e83a9e0fd48574b34452c4a2e2ae80c94f96242e16efb80f9f2c664782fc52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_davinci, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 10 09:44:53 compute-1 podman[77015]: 2025-10-10 09:44:53.176553566 +0000 UTC m=+0.190492534 container attach 17e83a9e0fd48574b34452c4a2e2ae80c94f96242e16efb80f9f2c664782fc52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_davinci, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct 10 09:44:53 compute-1 ceph-osd[76867]: bdev(0x55b094a31800 /var/lib/ceph/osd/ceph-1/block) close
Oct 10 09:44:53 compute-1 ceph-osd[76867]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Oct 10 09:44:53 compute-1 ceph-osd[76867]: load: jerasure load: lrc 
Oct 10 09:44:53 compute-1 ceph-osd[76867]: bdev(0x55b0958d6c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 10 09:44:53 compute-1 ceph-osd[76867]: bdev(0x55b0958d6c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 10 09:44:53 compute-1 ceph-osd[76867]: bdev(0x55b0958d6c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 09:44:53 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 10 09:44:53 compute-1 ceph-osd[76867]: bdev(0x55b0958d6c00 /var/lib/ceph/osd/ceph-1/block) close
Oct 10 09:44:53 compute-1 ceph-osd[76867]: bdev(0x55b0958d6c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 10 09:44:53 compute-1 ceph-osd[76867]: bdev(0x55b0958d6c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 10 09:44:53 compute-1 ceph-osd[76867]: bdev(0x55b0958d6c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 09:44:53 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 10 09:44:53 compute-1 ceph-osd[76867]: bdev(0x55b0958d6c00 /var/lib/ceph/osd/ceph-1/block) close
Oct 10 09:44:53 compute-1 lvm[77118]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 09:44:53 compute-1 lvm[77118]: VG ceph_vg0 finished
Oct 10 09:44:54 compute-1 hopeful_davinci[77034]: {}
Oct 10 09:44:54 compute-1 systemd[1]: libpod-17e83a9e0fd48574b34452c4a2e2ae80c94f96242e16efb80f9f2c664782fc52.scope: Deactivated successfully.
Oct 10 09:44:54 compute-1 podman[77015]: 2025-10-10 09:44:54.083657657 +0000 UTC m=+1.097596655 container died 17e83a9e0fd48574b34452c4a2e2ae80c94f96242e16efb80f9f2c664782fc52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_davinci, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 10 09:44:54 compute-1 systemd[1]: libpod-17e83a9e0fd48574b34452c4a2e2ae80c94f96242e16efb80f9f2c664782fc52.scope: Consumed 1.601s CPU time.
Oct 10 09:44:54 compute-1 ceph-osd[76867]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct 10 09:44:54 compute-1 ceph-osd[76867]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bdev(0x55b0958d6c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bdev(0x55b0958d6c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bdev(0x55b0958d6c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bdev(0x55b0958d6c00 /var/lib/ceph/osd/ceph-1/block) close
Oct 10 09:44:54 compute-1 systemd[1]: var-lib-containers-storage-overlay-d65ebe0d69a1c556116f7717c959d347e1220ebfab71deb34d40c95b7f5b28d5-merged.mount: Deactivated successfully.
Oct 10 09:44:54 compute-1 podman[77015]: 2025-10-10 09:44:54.144479735 +0000 UTC m=+1.158418713 container remove 17e83a9e0fd48574b34452c4a2e2ae80c94f96242e16efb80f9f2c664782fc52 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bdev(0x55b0958d6c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bdev(0x55b0958d6c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bdev(0x55b0958d6c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bdev(0x55b0958d6c00 /var/lib/ceph/osd/ceph-1/block) close
Oct 10 09:44:54 compute-1 systemd[1]: libpod-conmon-17e83a9e0fd48574b34452c4a2e2ae80c94f96242e16efb80f9f2c664782fc52.scope: Deactivated successfully.
Oct 10 09:44:54 compute-1 sudo[76904]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:54 compute-1 sudo[77144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bdev(0x55b0958d6c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bdev(0x55b0958d6c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bdev(0x55b0958d6c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bdev(0x55b0958d6c00 /var/lib/ceph/osd/ceph-1/block) close
Oct 10 09:44:54 compute-1 sudo[77144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:54 compute-1 sudo[77144]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:54 compute-1 sudo[77172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:44:54 compute-1 sudo[77172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:54 compute-1 sudo[77172]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bdev(0x55b0958d6c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bdev(0x55b0958d6c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bdev(0x55b0958d6c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bdev(0x55b0958d7000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bdev(0x55b0958d7000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bdev(0x55b0958d7000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bluefs mount
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bluefs mount shared_bdev_used = 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: RocksDB version: 7.9.2
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Git sha 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Compile date 2025-07-17 03:12:14
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: DB SUMMARY
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: DB Session ID:  SHLX46DNWVN5ILQBHSQ1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: CURRENT file:  CURRENT
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: IDENTITY file:  IDENTITY
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                         Options.error_if_exists: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                       Options.create_if_missing: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                         Options.paranoid_checks: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                                     Options.env: 0x55b0958a7dc0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                                Options.info_log: 0x55b0958ab7a0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.max_file_opening_threads: 16
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                              Options.statistics: (nil)
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                               Options.use_fsync: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                       Options.max_log_file_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                         Options.allow_fallocate: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                        Options.use_direct_reads: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.create_missing_column_families: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                              Options.db_log_dir: 
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                                 Options.wal_dir: db.wal
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.advise_random_on_open: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                    Options.write_buffer_manager: 0x55b0959a0a00
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                            Options.rate_limiter: (nil)
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.unordered_write: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                               Options.row_cache: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                              Options.wal_filter: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.allow_ingest_behind: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.two_write_queues: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.manual_wal_flush: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.wal_compression: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.atomic_flush: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                 Options.log_readahead_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.allow_data_in_errors: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.db_host_id: __hostname__
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.max_background_jobs: 4
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.max_background_compactions: -1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.max_subcompactions: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                          Options.max_open_files: -1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                          Options.bytes_per_sync: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.max_background_flushes: -1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Compression algorithms supported:
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         kZSTD supported: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         kXpressCompression supported: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         kBZip2Compression supported: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         kLZ4Compression supported: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         kZlibCompression supported: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         kLZ4HCCompression supported: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         kSnappyCompression supported: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.compaction_filter: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b0958abb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b094ac7350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.compression: LZ4
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.num_levels: 7
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                           Options.bloom_locality: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                               Options.ttl: 2592000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                       Options.enable_blob_files: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                           Options.min_blob_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:           Options.merge_operator: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.compaction_filter: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b0958abb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b094ac7350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.compression: LZ4
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.num_levels: 7
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                           Options.bloom_locality: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                               Options.ttl: 2592000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                       Options.enable_blob_files: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                           Options.min_blob_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:           Options.merge_operator: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.compaction_filter: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b0958abb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b094ac7350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.compression: LZ4
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.num_levels: 7
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                           Options.bloom_locality: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 09:44:54 compute-1 sudo[77197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                               Options.ttl: 2592000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                       Options.enable_blob_files: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                           Options.min_blob_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:           Options.merge_operator: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.compaction_filter: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b0958abb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b094ac7350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.compression: LZ4
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.num_levels: 7
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 09:44:54 compute-1 sudo[77197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                           Options.bloom_locality: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                               Options.ttl: 2592000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                       Options.enable_blob_files: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                           Options.min_blob_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:           Options.merge_operator: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.compaction_filter: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b0958abb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b094ac7350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.compression: LZ4
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.num_levels: 7
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                           Options.bloom_locality: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                               Options.ttl: 2592000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                       Options.enable_blob_files: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                           Options.min_blob_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:           Options.merge_operator: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.compaction_filter: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b0958abb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b094ac7350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.compression: LZ4
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.num_levels: 7
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                           Options.bloom_locality: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                               Options.ttl: 2592000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                       Options.enable_blob_files: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                           Options.min_blob_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:           Options.merge_operator: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.compaction_filter: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b0958abb60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b094ac7350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.compression: LZ4
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.num_levels: 7
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                           Options.bloom_locality: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                               Options.ttl: 2592000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                       Options.enable_blob_files: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                           Options.min_blob_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:           Options.merge_operator: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.compaction_filter: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b0958abb80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b094ac69b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.compression: LZ4
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.num_levels: 7
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                           Options.bloom_locality: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                               Options.ttl: 2592000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                       Options.enable_blob_files: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                           Options.min_blob_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:           Options.merge_operator: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.compaction_filter: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b0958abb80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b094ac69b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.compression: LZ4
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.num_levels: 7
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                           Options.bloom_locality: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                               Options.ttl: 2592000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                       Options.enable_blob_files: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                           Options.min_blob_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:           Options.merge_operator: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.compaction_filter: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b0958abb80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b094ac69b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.compression: LZ4
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.num_levels: 7
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                           Options.bloom_locality: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                               Options.ttl: 2592000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                       Options.enable_blob_files: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                           Options.min_blob_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 6165b24c-a467-4199-b240-d2d6d1edbf3f
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089494735052, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089494735701, "job": 1, "event": "recovery_finished"}
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: freelist init
Oct 10 09:44:54 compute-1 ceph-osd[76867]: freelist _read_cfg
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bluefs umount
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bdev(0x55b0958d7000 /var/lib/ceph/osd/ceph-1/block) close
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bdev(0x55b0958d7000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bdev(0x55b0958d7000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bdev(0x55b0958d7000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bluefs mount
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bluefs mount shared_bdev_used = 4718592
Oct 10 09:44:54 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: RocksDB version: 7.9.2
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Git sha 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Compile date 2025-07-17 03:12:14
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: DB SUMMARY
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: DB Session ID:  SHLX46DNWVN5ILQBHSQ0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: CURRENT file:  CURRENT
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: IDENTITY file:  IDENTITY
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                         Options.error_if_exists: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                       Options.create_if_missing: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                         Options.paranoid_checks: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                                     Options.env: 0x55b095a44310
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                                Options.info_log: 0x55b0958ab920
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.max_file_opening_threads: 16
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                              Options.statistics: (nil)
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                               Options.use_fsync: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                       Options.max_log_file_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                         Options.allow_fallocate: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                        Options.use_direct_reads: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.create_missing_column_families: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                              Options.db_log_dir: 
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                                 Options.wal_dir: db.wal
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.advise_random_on_open: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                    Options.write_buffer_manager: 0x55b0959a0a00
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                            Options.rate_limiter: (nil)
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.unordered_write: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                               Options.row_cache: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                              Options.wal_filter: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.allow_ingest_behind: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.two_write_queues: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.manual_wal_flush: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.wal_compression: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.atomic_flush: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                 Options.log_readahead_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.allow_data_in_errors: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.db_host_id: __hostname__
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.max_background_jobs: 4
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.max_background_compactions: -1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.max_subcompactions: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                          Options.max_open_files: -1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                          Options.bytes_per_sync: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.max_background_flushes: -1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Compression algorithms supported:
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         kZSTD supported: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         kXpressCompression supported: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         kBZip2Compression supported: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         kLZ4Compression supported: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         kZlibCompression supported: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         kLZ4HCCompression supported: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         kSnappyCompression supported: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.compaction_filter: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b0958ab680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b094ac7350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.compression: LZ4
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.num_levels: 7
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                           Options.bloom_locality: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                               Options.ttl: 2592000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                       Options.enable_blob_files: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                           Options.min_blob_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:           Options.merge_operator: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.compaction_filter: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b0958ab680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b094ac7350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.compression: LZ4
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.num_levels: 7
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                           Options.bloom_locality: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                               Options.ttl: 2592000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                       Options.enable_blob_files: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                           Options.min_blob_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:           Options.merge_operator: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.compaction_filter: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b0958ab680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b094ac7350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.compression: LZ4
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.num_levels: 7
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                           Options.bloom_locality: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                               Options.ttl: 2592000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                       Options.enable_blob_files: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                           Options.min_blob_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:           Options.merge_operator: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.compaction_filter: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b0958ab680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b094ac7350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.compression: LZ4
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.num_levels: 7
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                           Options.bloom_locality: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                               Options.ttl: 2592000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                       Options.enable_blob_files: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                           Options.min_blob_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:           Options.merge_operator: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.compaction_filter: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b0958ab680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b094ac7350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.compression: LZ4
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.num_levels: 7
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                           Options.bloom_locality: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                               Options.ttl: 2592000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                       Options.enable_blob_files: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                           Options.min_blob_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:           Options.merge_operator: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.compaction_filter: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b0958ab680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b094ac7350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.compression: LZ4
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.num_levels: 7
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                           Options.bloom_locality: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 09:44:54 compute-1 ceph-osd[76867]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                               Options.ttl: 2592000
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                       Options.enable_blob_files: false
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                           Options.min_blob_size: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:           Options.merge_operator: None
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:        Options.compaction_filter: None
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b0958ab680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b094ac7350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:          Options.compression: LZ4
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:             Options.num_levels: 7
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                           Options.bloom_locality: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                               Options.ttl: 2592000
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                       Options.enable_blob_files: false
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                           Options.min_blob_size: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:           Options.merge_operator: None
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:        Options.compaction_filter: None
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b0958abac0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b094ac69b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:          Options.compression: LZ4
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:             Options.num_levels: 7
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                           Options.bloom_locality: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                               Options.ttl: 2592000
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                       Options.enable_blob_files: false
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                           Options.min_blob_size: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:           Options.merge_operator: None
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:        Options.compaction_filter: None
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b0958abac0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b094ac69b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:          Options.compression: LZ4
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:             Options.num_levels: 7
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                           Options.bloom_locality: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                               Options.ttl: 2592000
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                       Options.enable_blob_files: false
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                           Options.min_blob_size: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:           Options.merge_operator: None
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:        Options.compaction_filter: None
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55b0958abac0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55b094ac69b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:        Options.write_buffer_size: 16777216
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:  Options.max_write_buffer_number: 64
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:          Options.compression: LZ4
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:             Options.num_levels: 7
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                           Options.bloom_locality: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                               Options.ttl: 2592000
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                       Options.enable_blob_files: false
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                           Options.min_blob_size: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 6165b24c-a467-4199-b240-d2d6d1edbf3f
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089494992902, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089494997270, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089494, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6165b24c-a467-4199-b240-d2d6d1edbf3f", "db_session_id": "SHLX46DNWVN5ILQBHSQ0", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089495000695, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089494, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6165b24c-a467-4199-b240-d2d6d1edbf3f", "db_session_id": "SHLX46DNWVN5ILQBHSQ0", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089495003925, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089494, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "6165b24c-a467-4199-b240-d2d6d1edbf3f", "db_session_id": "SHLX46DNWVN5ILQBHSQ0", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089495005749, "job": 1, "event": "recovery_finished"}
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55b095a84000
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: DB pointer 0x55b095a52000
Oct 10 09:44:55 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 10 09:44:55 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Oct 10 09:44:55 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 09:44:55 compute-1 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac69b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac69b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac69b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 10 09:44:55 compute-1 ceph-osd[76867]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct 10 09:44:55 compute-1 ceph-osd[76867]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct 10 09:44:55 compute-1 ceph-osd[76867]: _get_class not permitted to load lua
Oct 10 09:44:55 compute-1 ceph-osd[76867]: _get_class not permitted to load sdk
Oct 10 09:44:55 compute-1 ceph-osd[76867]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct 10 09:44:55 compute-1 ceph-osd[76867]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct 10 09:44:55 compute-1 ceph-osd[76867]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct 10 09:44:55 compute-1 ceph-osd[76867]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct 10 09:44:55 compute-1 ceph-osd[76867]: osd.1 0 load_pgs
Oct 10 09:44:55 compute-1 ceph-osd[76867]: osd.1 0 load_pgs opened 0 pgs
Oct 10 09:44:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1[76863]: 2025-10-10T09:44:55.048+0000 7fc8b33b2740 -1 osd.1 0 log_to_monitors true
Oct 10 09:44:55 compute-1 ceph-osd[76867]: osd.1 0 log_to_monitors true
Oct 10 09:44:55 compute-1 podman[77698]: 2025-10-10 09:44:55.441206076 +0000 UTC m=+0.091300399 container exec 8a2c16c69263d410aefee28722d7403147210c67386f896d09115878d61e6ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325)
Oct 10 09:44:55 compute-1 podman[77698]: 2025-10-10 09:44:55.562678868 +0000 UTC m=+0.212773111 container exec_died 8a2c16c69263d410aefee28722d7403147210c67386f896d09115878d61e6ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Oct 10 09:44:55 compute-1 sudo[77197]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:55 compute-1 sudo[77747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:44:55 compute-1 sudo[77747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:55 compute-1 sudo[77747]: pam_unix(sudo:session): session closed for user root
Oct 10 09:44:55 compute-1 sudo[77772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- inventory --format=json-pretty --filter-for-batch
Oct 10 09:44:55 compute-1 sudo[77772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:44:56 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct 10 09:44:56 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct 10 09:44:56 compute-1 podman[77837]: 2025-10-10 09:44:56.411609302 +0000 UTC m=+0.051778385 container create 670fd5585ac5deec683cba3eaedb8ad391776d6872d5483428ed3e369a5392dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_benz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 10 09:44:56 compute-1 systemd[1]: Started libpod-conmon-670fd5585ac5deec683cba3eaedb8ad391776d6872d5483428ed3e369a5392dd.scope.
Oct 10 09:44:56 compute-1 podman[77837]: 2025-10-10 09:44:56.390668268 +0000 UTC m=+0.030837351 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:44:56 compute-1 systemd[1]: Started libcrun container.
Oct 10 09:44:56 compute-1 podman[77837]: 2025-10-10 09:44:56.507360555 +0000 UTC m=+0.147530158 container init 670fd5585ac5deec683cba3eaedb8ad391776d6872d5483428ed3e369a5392dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_benz, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Oct 10 09:44:56 compute-1 podman[77837]: 2025-10-10 09:44:56.516490412 +0000 UTC m=+0.156659525 container start 670fd5585ac5deec683cba3eaedb8ad391776d6872d5483428ed3e369a5392dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_benz, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:44:56 compute-1 podman[77837]: 2025-10-10 09:44:56.520893277 +0000 UTC m=+0.161062370 container attach 670fd5585ac5deec683cba3eaedb8ad391776d6872d5483428ed3e369a5392dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_benz, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid)
Oct 10 09:44:56 compute-1 zen_benz[77854]: 167 167
Oct 10 09:44:56 compute-1 systemd[1]: libpod-670fd5585ac5deec683cba3eaedb8ad391776d6872d5483428ed3e369a5392dd.scope: Deactivated successfully.
Oct 10 09:44:56 compute-1 podman[77837]: 2025-10-10 09:44:56.525848095 +0000 UTC m=+0.166017208 container died 670fd5585ac5deec683cba3eaedb8ad391776d6872d5483428ed3e369a5392dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 10 09:44:56 compute-1 systemd[1]: var-lib-containers-storage-overlay-75616985f63a1846b7bb76c201a96b98b21081db7cc16e09605557691c1d7c39-merged.mount: Deactivated successfully.
Oct 10 09:44:56 compute-1 podman[77837]: 2025-10-10 09:44:56.582406803 +0000 UTC m=+0.222575906 container remove 670fd5585ac5deec683cba3eaedb8ad391776d6872d5483428ed3e369a5392dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_benz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:44:56 compute-1 systemd[1]: libpod-conmon-670fd5585ac5deec683cba3eaedb8ad391776d6872d5483428ed3e369a5392dd.scope: Deactivated successfully.
Oct 10 09:44:56 compute-1 podman[77876]: 2025-10-10 09:44:56.750574075 +0000 UTC m=+0.052411201 container create f99376385126aa9c8ef286cbc033a3f5ff96b59cdd5aacaca2361c8da5965ee8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 10 09:44:56 compute-1 systemd[1]: Started libpod-conmon-f99376385126aa9c8ef286cbc033a3f5ff96b59cdd5aacaca2361c8da5965ee8.scope.
Oct 10 09:44:56 compute-1 podman[77876]: 2025-10-10 09:44:56.729112498 +0000 UTC m=+0.030949644 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:44:56 compute-1 systemd[1]: Started libcrun container.
Oct 10 09:44:56 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30205efec840cb82a142f27ef2e5cc1a48365a61a4648761dfecc1f4e84f10c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:56 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30205efec840cb82a142f27ef2e5cc1a48365a61a4648761dfecc1f4e84f10c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:56 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30205efec840cb82a142f27ef2e5cc1a48365a61a4648761dfecc1f4e84f10c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:56 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30205efec840cb82a142f27ef2e5cc1a48365a61a4648761dfecc1f4e84f10c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:44:56 compute-1 podman[77876]: 2025-10-10 09:44:56.846608036 +0000 UTC m=+0.148445262 container init f99376385126aa9c8ef286cbc033a3f5ff96b59cdd5aacaca2361c8da5965ee8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_euclid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:44:56 compute-1 podman[77876]: 2025-10-10 09:44:56.861386219 +0000 UTC m=+0.163223495 container start f99376385126aa9c8ef286cbc033a3f5ff96b59cdd5aacaca2361c8da5965ee8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_euclid, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct 10 09:44:56 compute-1 podman[77876]: 2025-10-10 09:44:56.866179414 +0000 UTC m=+0.168016580 container attach f99376385126aa9c8ef286cbc033a3f5ff96b59cdd5aacaca2361c8da5965ee8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_euclid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 10 09:44:57 compute-1 ceph-osd[76867]: osd.1 0 done with init, starting boot process
Oct 10 09:44:57 compute-1 ceph-osd[76867]: osd.1 0 start_boot
Oct 10 09:44:57 compute-1 ceph-osd[76867]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct 10 09:44:57 compute-1 ceph-osd[76867]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct 10 09:44:57 compute-1 ceph-osd[76867]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct 10 09:44:57 compute-1 ceph-osd[76867]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct 10 09:44:57 compute-1 ceph-osd[76867]: osd.1 0  bench count 12288000 bsize 4 KiB
Oct 10 09:44:57 compute-1 quirky_euclid[77892]: [
Oct 10 09:44:57 compute-1 quirky_euclid[77892]:     {
Oct 10 09:44:57 compute-1 quirky_euclid[77892]:         "available": false,
Oct 10 09:44:57 compute-1 quirky_euclid[77892]:         "being_replaced": false,
Oct 10 09:44:57 compute-1 quirky_euclid[77892]:         "ceph_device_lvm": false,
Oct 10 09:44:57 compute-1 quirky_euclid[77892]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 10 09:44:57 compute-1 quirky_euclid[77892]:         "lsm_data": {},
Oct 10 09:44:57 compute-1 quirky_euclid[77892]:         "lvs": [],
Oct 10 09:44:57 compute-1 quirky_euclid[77892]:         "path": "/dev/sr0",
Oct 10 09:44:57 compute-1 quirky_euclid[77892]:         "rejected_reasons": [
Oct 10 09:44:57 compute-1 quirky_euclid[77892]:             "Insufficient space (<5GB)",
Oct 10 09:44:57 compute-1 quirky_euclid[77892]:             "Has a FileSystem"
Oct 10 09:44:57 compute-1 quirky_euclid[77892]:         ],
Oct 10 09:44:57 compute-1 quirky_euclid[77892]:         "sys_api": {
Oct 10 09:44:57 compute-1 quirky_euclid[77892]:             "actuators": null,
Oct 10 09:44:57 compute-1 quirky_euclid[77892]:             "device_nodes": [
Oct 10 09:44:57 compute-1 quirky_euclid[77892]:                 "sr0"
Oct 10 09:44:57 compute-1 quirky_euclid[77892]:             ],
Oct 10 09:44:57 compute-1 quirky_euclid[77892]:             "devname": "sr0",
Oct 10 09:44:57 compute-1 quirky_euclid[77892]:             "human_readable_size": "482.00 KB",
Oct 10 09:44:57 compute-1 quirky_euclid[77892]:             "id_bus": "ata",
Oct 10 09:44:57 compute-1 quirky_euclid[77892]:             "model": "QEMU DVD-ROM",
Oct 10 09:44:57 compute-1 quirky_euclid[77892]:             "nr_requests": "2",
Oct 10 09:44:57 compute-1 quirky_euclid[77892]:             "parent": "/dev/sr0",
Oct 10 09:44:57 compute-1 quirky_euclid[77892]:             "partitions": {},
Oct 10 09:44:57 compute-1 quirky_euclid[77892]:             "path": "/dev/sr0",
Oct 10 09:44:57 compute-1 quirky_euclid[77892]:             "removable": "1",
Oct 10 09:44:57 compute-1 quirky_euclid[77892]:             "rev": "2.5+",
Oct 10 09:44:57 compute-1 quirky_euclid[77892]:             "ro": "0",
Oct 10 09:44:57 compute-1 quirky_euclid[77892]:             "rotational": "0",
Oct 10 09:44:57 compute-1 quirky_euclid[77892]:             "sas_address": "",
Oct 10 09:44:57 compute-1 quirky_euclid[77892]:             "sas_device_handle": "",
Oct 10 09:44:57 compute-1 quirky_euclid[77892]:             "scheduler_mode": "mq-deadline",
Oct 10 09:44:57 compute-1 quirky_euclid[77892]:             "sectors": 0,
Oct 10 09:44:57 compute-1 quirky_euclid[77892]:             "sectorsize": "2048",
Oct 10 09:44:57 compute-1 quirky_euclid[77892]:             "size": 493568.0,
Oct 10 09:44:57 compute-1 quirky_euclid[77892]:             "support_discard": "2048",
Oct 10 09:44:57 compute-1 quirky_euclid[77892]:             "type": "disk",
Oct 10 09:44:57 compute-1 quirky_euclid[77892]:             "vendor": "QEMU"
Oct 10 09:44:57 compute-1 quirky_euclid[77892]:         }
Oct 10 09:44:57 compute-1 quirky_euclid[77892]:     }
Oct 10 09:44:57 compute-1 quirky_euclid[77892]: ]
Oct 10 09:44:57 compute-1 systemd[1]: libpod-f99376385126aa9c8ef286cbc033a3f5ff96b59cdd5aacaca2361c8da5965ee8.scope: Deactivated successfully.
Oct 10 09:44:57 compute-1 podman[78826]: 2025-10-10 09:44:57.846305012 +0000 UTC m=+0.045591684 container died f99376385126aa9c8ef286cbc033a3f5ff96b59cdd5aacaca2361c8da5965ee8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_euclid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:44:57 compute-1 systemd[1]: var-lib-containers-storage-overlay-30205efec840cb82a142f27ef2e5cc1a48365a61a4648761dfecc1f4e84f10c1-merged.mount: Deactivated successfully.
Oct 10 09:44:57 compute-1 podman[78826]: 2025-10-10 09:44:57.933706319 +0000 UTC m=+0.132992921 container remove f99376385126aa9c8ef286cbc033a3f5ff96b59cdd5aacaca2361c8da5965ee8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_euclid, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:44:57 compute-1 systemd[1]: libpod-conmon-f99376385126aa9c8ef286cbc033a3f5ff96b59cdd5aacaca2361c8da5965ee8.scope: Deactivated successfully.
Oct 10 09:44:58 compute-1 sudo[77772]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:00 compute-1 ceph-osd[76867]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 9.800 iops: 2508.856 elapsed_sec: 1.196
Oct 10 09:45:00 compute-1 ceph-osd[76867]: log_channel(cluster) log [WRN] : OSD bench result of 2508.856277 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 10 09:45:00 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1[76863]: 2025-10-10T09:45:00.906+0000 7fc8af335640 -1 osd.1 0 waiting for initial osdmap
Oct 10 09:45:00 compute-1 ceph-osd[76867]: osd.1 0 waiting for initial osdmap
Oct 10 09:45:00 compute-1 ceph-osd[76867]: osd.1 12 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct 10 09:45:00 compute-1 ceph-osd[76867]: osd.1 12 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Oct 10 09:45:00 compute-1 ceph-osd[76867]: osd.1 12 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct 10 09:45:00 compute-1 ceph-osd[76867]: osd.1 12 check_osdmap_features require_osd_release unknown -> squid
Oct 10 09:45:00 compute-1 ceph-osd[76867]: osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 10 09:45:00 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-osd-1[76863]: 2025-10-10T09:45:00.941+0000 7fc8aa95d640 -1 osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 10 09:45:00 compute-1 ceph-osd[76867]: osd.1 12 set_numa_affinity not setting numa affinity
Oct 10 09:45:00 compute-1 ceph-osd[76867]: osd.1 12 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Oct 10 09:45:01 compute-1 ceph-osd[76867]: osd.1 13 state: booting -> active
Oct 10 09:45:01 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 pi=[11,13)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:02 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 14 pg[1.0( empty local-lis/les=13/14 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 pi=[11,13)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:19 compute-1 sudo[78840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:45:19 compute-1 sudo[78840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:19 compute-1 sudo[78840]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:20 compute-1 sudo[78865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:45:20 compute-1 sudo[78865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:20 compute-1 podman[78933]: 2025-10-10 09:45:20.457141197 +0000 UTC m=+0.048587651 container create 381676417ddfa018f2459f206bb6cafdf187400bbdd59fb06682b10111a551ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_visvesvaraya, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:45:20 compute-1 systemd[1]: Started libpod-conmon-381676417ddfa018f2459f206bb6cafdf187400bbdd59fb06682b10111a551ff.scope.
Oct 10 09:45:20 compute-1 systemd[1]: Started libcrun container.
Oct 10 09:45:20 compute-1 podman[78933]: 2025-10-10 09:45:20.43639818 +0000 UTC m=+0.027844634 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:45:20 compute-1 podman[78933]: 2025-10-10 09:45:20.546083505 +0000 UTC m=+0.137530029 container init 381676417ddfa018f2459f206bb6cafdf187400bbdd59fb06682b10111a551ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_visvesvaraya, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:45:20 compute-1 podman[78933]: 2025-10-10 09:45:20.55554289 +0000 UTC m=+0.146989334 container start 381676417ddfa018f2459f206bb6cafdf187400bbdd59fb06682b10111a551ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct 10 09:45:20 compute-1 podman[78933]: 2025-10-10 09:45:20.559092333 +0000 UTC m=+0.150538867 container attach 381676417ddfa018f2459f206bb6cafdf187400bbdd59fb06682b10111a551ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_visvesvaraya, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 10 09:45:20 compute-1 relaxed_visvesvaraya[78949]: 167 167
Oct 10 09:45:20 compute-1 systemd[1]: libpod-381676417ddfa018f2459f206bb6cafdf187400bbdd59fb06682b10111a551ff.scope: Deactivated successfully.
Oct 10 09:45:20 compute-1 podman[78933]: 2025-10-10 09:45:20.564641337 +0000 UTC m=+0.156087781 container died 381676417ddfa018f2459f206bb6cafdf187400bbdd59fb06682b10111a551ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct 10 09:45:20 compute-1 systemd[1]: var-lib-containers-storage-overlay-70e6ed51daa7c150156491490c6ed0d765fcddc8c21e20769b0dd5ed3be34660-merged.mount: Deactivated successfully.
Oct 10 09:45:20 compute-1 podman[78933]: 2025-10-10 09:45:20.613917015 +0000 UTC m=+0.205363509 container remove 381676417ddfa018f2459f206bb6cafdf187400bbdd59fb06682b10111a551ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_visvesvaraya, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct 10 09:45:20 compute-1 systemd[1]: libpod-conmon-381676417ddfa018f2459f206bb6cafdf187400bbdd59fb06682b10111a551ff.scope: Deactivated successfully.
Oct 10 09:45:20 compute-1 podman[78966]: 2025-10-10 09:45:20.710690426 +0000 UTC m=+0.062554185 container create d3f5a72ddd2d281c8dec587a98ef6e31e62e447e1c803ff4f45a3065a5fa9f66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_davinci, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct 10 09:45:20 compute-1 systemd[1]: Started libpod-conmon-d3f5a72ddd2d281c8dec587a98ef6e31e62e447e1c803ff4f45a3065a5fa9f66.scope.
Oct 10 09:45:20 compute-1 systemd[1]: Started libcrun container.
Oct 10 09:45:20 compute-1 podman[78966]: 2025-10-10 09:45:20.678794118 +0000 UTC m=+0.030657977 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:45:20 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eda2fa75288cabcde1b40dc790761ae4920988fa3116c55c19d66ed925c5fcf/merged/tmp/config supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:20 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eda2fa75288cabcde1b40dc790761ae4920988fa3116c55c19d66ed925c5fcf/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:20 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eda2fa75288cabcde1b40dc790761ae4920988fa3116c55c19d66ed925c5fcf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:20 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eda2fa75288cabcde1b40dc790761ae4920988fa3116c55c19d66ed925c5fcf/merged/var/lib/ceph/mon/ceph-compute-1 supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:20 compute-1 podman[78966]: 2025-10-10 09:45:20.795967398 +0000 UTC m=+0.147831197 container init d3f5a72ddd2d281c8dec587a98ef6e31e62e447e1c803ff4f45a3065a5fa9f66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_davinci, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:45:20 compute-1 podman[78966]: 2025-10-10 09:45:20.810503135 +0000 UTC m=+0.162366904 container start d3f5a72ddd2d281c8dec587a98ef6e31e62e447e1c803ff4f45a3065a5fa9f66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 10 09:45:20 compute-1 podman[78966]: 2025-10-10 09:45:20.813813841 +0000 UTC m=+0.165677640 container attach d3f5a72ddd2d281c8dec587a98ef6e31e62e447e1c803ff4f45a3065a5fa9f66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_davinci, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct 10 09:45:20 compute-1 systemd[1]: libpod-d3f5a72ddd2d281c8dec587a98ef6e31e62e447e1c803ff4f45a3065a5fa9f66.scope: Deactivated successfully.
Oct 10 09:45:20 compute-1 podman[78966]: 2025-10-10 09:45:20.92787619 +0000 UTC m=+0.279739979 container died d3f5a72ddd2d281c8dec587a98ef6e31e62e447e1c803ff4f45a3065a5fa9f66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_davinci, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:45:20 compute-1 systemd[1]: var-lib-containers-storage-overlay-7eda2fa75288cabcde1b40dc790761ae4920988fa3116c55c19d66ed925c5fcf-merged.mount: Deactivated successfully.
Oct 10 09:45:20 compute-1 podman[78966]: 2025-10-10 09:45:20.981798169 +0000 UTC m=+0.333661958 container remove d3f5a72ddd2d281c8dec587a98ef6e31e62e447e1c803ff4f45a3065a5fa9f66 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_davinci, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct 10 09:45:20 compute-1 systemd[1]: libpod-conmon-d3f5a72ddd2d281c8dec587a98ef6e31e62e447e1c803ff4f45a3065a5fa9f66.scope: Deactivated successfully.
Oct 10 09:45:21 compute-1 systemd[1]: Reloading.
Oct 10 09:45:21 compute-1 systemd-rc-local-generator[79048]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:45:21 compute-1 systemd-sysv-generator[79052]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:45:21 compute-1 systemd[1]: Reloading.
Oct 10 09:45:21 compute-1 systemd-rc-local-generator[79085]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:45:21 compute-1 systemd-sysv-generator[79088]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:45:21 compute-1 systemd[1]: Starting Ceph mon.compute-1 for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 09:45:21 compute-1 podman[79147]: 2025-10-10 09:45:21.933073718 +0000 UTC m=+0.067701077 container create ecb3fdbc31816ba5aabb3eb17cbf5dd91e70870c193eb52bff7f160f4ea6fe2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mon-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:45:21 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/137b6273c74db66bf89d3ee88c47734d78c69866490f749cfa682a6cce6beb17/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:21 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/137b6273c74db66bf89d3ee88c47734d78c69866490f749cfa682a6cce6beb17/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:21 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/137b6273c74db66bf89d3ee88c47734d78c69866490f749cfa682a6cce6beb17/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:21 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/137b6273c74db66bf89d3ee88c47734d78c69866490f749cfa682a6cce6beb17/merged/var/lib/ceph/mon/ceph-compute-1 supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:21 compute-1 podman[79147]: 2025-10-10 09:45:21.904061526 +0000 UTC m=+0.038688935 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:45:22 compute-1 podman[79147]: 2025-10-10 09:45:22.004597424 +0000 UTC m=+0.139224813 container init ecb3fdbc31816ba5aabb3eb17cbf5dd91e70870c193eb52bff7f160f4ea6fe2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mon-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Oct 10 09:45:22 compute-1 podman[79147]: 2025-10-10 09:45:22.018523736 +0000 UTC m=+0.153151085 container start ecb3fdbc31816ba5aabb3eb17cbf5dd91e70870c193eb52bff7f160f4ea6fe2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mon-compute-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 10 09:45:22 compute-1 bash[79147]: ecb3fdbc31816ba5aabb3eb17cbf5dd91e70870c193eb52bff7f160f4ea6fe2b
Oct 10 09:45:22 compute-1 systemd[1]: Started Ceph mon.compute-1 for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 09:45:22 compute-1 ceph-mon[79167]: set uid:gid to 167:167 (ceph:ceph)
Oct 10 09:45:22 compute-1 ceph-mon[79167]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Oct 10 09:45:22 compute-1 ceph-mon[79167]: pidfile_write: ignore empty --pid-file
Oct 10 09:45:22 compute-1 ceph-mon[79167]: load: jerasure load: lrc 
Oct 10 09:45:22 compute-1 sudo[78865]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb: RocksDB version: 7.9.2
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb: Git sha 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb: Compile date 2025-07-17 03:12:14
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb: DB SUMMARY
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb: DB Session ID:  7GCI9JJE38KWUSAORMRB
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb: CURRENT file:  CURRENT
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb: IDENTITY file:  IDENTITY
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-1/store.db dir, Total Num: 0, files: 
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-1/store.db: 000004.log size: 511 ; 
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                         Options.error_if_exists: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                       Options.create_if_missing: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                         Options.paranoid_checks: 1
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                                     Options.env: 0x5625d2d9bc20
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                                      Options.fs: PosixFileSystem
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                                Options.info_log: 0x5625d3e3fa20
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                Options.max_file_opening_threads: 16
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                              Options.statistics: (nil)
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                               Options.use_fsync: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                       Options.max_log_file_size: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                         Options.allow_fallocate: 1
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                        Options.use_direct_reads: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:          Options.create_missing_column_families: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                              Options.db_log_dir: 
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                                 Options.wal_dir: 
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                   Options.advise_random_on_open: 1
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                    Options.write_buffer_manager: 0x5625d3e43900
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                            Options.rate_limiter: (nil)
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                  Options.unordered_write: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                               Options.row_cache: None
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                              Options.wal_filter: None
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:             Options.allow_ingest_behind: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:             Options.two_write_queues: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:             Options.manual_wal_flush: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:             Options.wal_compression: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:             Options.atomic_flush: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                 Options.log_readahead_size: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:             Options.allow_data_in_errors: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:             Options.db_host_id: __hostname__
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:             Options.max_background_jobs: 2
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:             Options.max_background_compactions: -1
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:             Options.max_subcompactions: 1
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:             Options.max_total_wal_size: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                          Options.max_open_files: -1
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                          Options.bytes_per_sync: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:       Options.compaction_readahead_size: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                  Options.max_background_flushes: -1
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb: Compression algorithms supported:
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:         kZSTD supported: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:         kXpressCompression supported: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:         kBZip2Compression supported: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:         kLZ4Compression supported: 1
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:         kZlibCompression supported: 1
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:         kLZ4HCCompression supported: 1
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:         kSnappyCompression supported: 1
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-1/store.db/MANIFEST-000005
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:           Options.merge_operator: 
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:        Options.compaction_filter: None
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:        Options.compaction_filter_factory: None
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:  Options.sst_partitioner_factory: None
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5625d3e3e5c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5625d3e63350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:        Options.write_buffer_size: 33554432
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:  Options.max_write_buffer_number: 2
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:          Options.compression: NoCompression
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:       Options.prefix_extractor: nullptr
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:             Options.num_levels: 7
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                  Options.compression_opts.level: 32767
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:               Options.compression_opts.strategy: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                  Options.compression_opts.enabled: false
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                        Options.arena_block_size: 1048576
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                Options.disable_auto_compactions: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                   Options.inplace_update_support: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                           Options.bloom_locality: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                    Options.max_successive_merges: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                Options.paranoid_file_checks: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                Options.force_consistency_checks: 1
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                Options.report_bg_io_stats: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                               Options.ttl: 2592000
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                       Options.enable_blob_files: false
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                           Options.min_blob_size: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                          Options.blob_file_size: 268435456
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb:                Options.blob_file_starting_level: 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-1/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 72205880-e92d-427e-a84d-d60d79c79ead
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089522094778, "job": 1, "event": "recovery_started", "wal_files": [4]}
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089522097179, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1648, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089522, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089522097446, "job": 1, "event": "recovery_finished"}
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5625d3e64e00
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb: DB pointer 0x5625d3f6e000
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 09:45:22 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.11 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.11 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5625d3e63350#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 10 09:45:22 compute-1 ceph-mon[79167]: mon.compute-1 does not exist in monmap, will attempt to join an existing cluster
Oct 10 09:45:22 compute-1 ceph-mon[79167]: using public_addr v2:192.168.122.101:0/0 -> [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0]
Oct 10 09:45:22 compute-1 ceph-mon[79167]: starting mon.compute-1 rank -1 at public addrs [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] at bind addrs [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-1 fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:45:22 compute-1 ceph-mon[79167]: mon.compute-1@-1(???) e0 preinit fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:45:22 compute-1 ceph-mon[79167]: mon.compute-1@-1(synchronizing).mds e1 new map
Oct 10 09:45:22 compute-1 ceph-mon[79167]: mon.compute-1@-1(synchronizing).mds e1 print_map
                                           e1
                                           btime 2025-10-10T09:43:15:731413+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Oct 10 09:45:22 compute-1 ceph-mon[79167]: mon.compute-1@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct 10 09:45:22 compute-1 ceph-mon[79167]: mon.compute-1@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct 10 09:45:22 compute-1 ceph-mon[79167]: mon.compute-1@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in
Oct 10 09:45:22 compute-1 ceph-mon[79167]: mon.compute-1@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in
Oct 10 09:45:22 compute-1 ceph-mon[79167]: mon.compute-1@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in
Oct 10 09:45:22 compute-1 ceph-mon[79167]: mon.compute-1@-1(synchronizing).osd e4 e4: 1 total, 0 up, 1 in
Oct 10 09:45:22 compute-1 ceph-mon[79167]: mon.compute-1@-1(synchronizing).osd e5 e5: 2 total, 0 up, 2 in
Oct 10 09:45:22 compute-1 ceph-mon[79167]: mon.compute-1@-1(synchronizing).osd e6 e6: 2 total, 0 up, 2 in
Oct 10 09:45:22 compute-1 ceph-mon[79167]: mon.compute-1@-1(synchronizing).osd e7 e7: 2 total, 0 up, 2 in
Oct 10 09:45:22 compute-1 ceph-mon[79167]: mon.compute-1@-1(synchronizing).osd e8 e8: 2 total, 0 up, 2 in
Oct 10 09:45:22 compute-1 ceph-mon[79167]: mon.compute-1@-1(synchronizing).osd e9 e9: 2 total, 0 up, 2 in
Oct 10 09:45:22 compute-1 ceph-mon[79167]: mon.compute-1@-1(synchronizing).osd e10 e10: 2 total, 1 up, 2 in
Oct 10 09:45:22 compute-1 ceph-mon[79167]: mon.compute-1@-1(synchronizing).osd e11 e11: 2 total, 1 up, 2 in
Oct 10 09:45:22 compute-1 ceph-mon[79167]: mon.compute-1@-1(synchronizing).osd e12 e12: 2 total, 1 up, 2 in
Oct 10 09:45:22 compute-1 ceph-mon[79167]: mon.compute-1@-1(synchronizing).osd e13 e13: 2 total, 2 up, 2 in
Oct 10 09:45:22 compute-1 ceph-mon[79167]: mon.compute-1@-1(synchronizing).osd e14 e14: 2 total, 2 up, 2 in
Oct 10 09:45:22 compute-1 ceph-mon[79167]: mon.compute-1@-1(synchronizing).osd e15 e15: 2 total, 2 up, 2 in
Oct 10 09:45:22 compute-1 ceph-mon[79167]: mon.compute-1@-1(synchronizing).osd e15 crush map has features 3314933000852226048, adjusting msgr requires
Oct 10 09:45:22 compute-1 ceph-mon[79167]: mon.compute-1@-1(synchronizing).osd e15 crush map has features 288514051259236352, adjusting msgr requires
Oct 10 09:45:22 compute-1 ceph-mon[79167]: mon.compute-1@-1(synchronizing).osd e15 crush map has features 288514051259236352, adjusting msgr requires
Oct 10 09:45:22 compute-1 ceph-mon[79167]: mon.compute-1@-1(synchronizing).osd e15 crush map has features 288514051259236352, adjusting msgr requires
Oct 10 09:45:22 compute-1 ceph-mon[79167]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Oct 10 09:45:22 compute-1 ceph-mon[79167]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:45:22 compute-1 ceph-mon[79167]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Oct 10 09:45:22 compute-1 ceph-mon[79167]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:45:22 compute-1 ceph-mon[79167]: Deploying daemon crash.compute-1 on compute-1
Oct 10 09:45:22 compute-1 ceph-mon[79167]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Oct 10 09:45:22 compute-1 ceph-mon[79167]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:22 compute-1 ceph-mon[79167]: pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/4172963951' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab"}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/4172963951' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c307f4a4-39e7-4a9c-9d19-a2b8712089ab"}]': finished
Oct 10 09:45:22 compute-1 ceph-mon[79167]: osdmap e4: 1 total, 0 up, 1 in
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/234960172' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "aea3dcf0-efc7-4ff7-81f8-9509a806fb04"}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/234960172' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "aea3dcf0-efc7-4ff7-81f8-9509a806fb04"}]': finished
Oct 10 09:45:22 compute-1 ceph-mon[79167]: osdmap e5: 2 total, 0 up, 2 in
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2176337060' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/1441666751' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:45:22 compute-1 ceph-mon[79167]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct 10 09:45:22 compute-1 ceph-mon[79167]: pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:45:22 compute-1 ceph-mon[79167]: pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: Deploying daemon osd.0 on compute-0
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: Deploying daemon osd.1 on compute-1
Oct 10 09:45:22 compute-1 ceph-mon[79167]: pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/192005781' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:22 compute-1 ceph-mon[79167]: pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='osd.0 [v2:192.168.122.100:6802/2298200206,v1:192.168.122.100:6803/2298200206]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='osd.0 [v2:192.168.122.100:6802/2298200206,v1:192.168.122.100:6803/2298200206]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct 10 09:45:22 compute-1 ceph-mon[79167]: osdmap e6: 2 total, 0 up, 2 in
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='osd.0 [v2:192.168.122.100:6802/2298200206,v1:192.168.122.100:6803/2298200206]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='osd.0 [v2:192.168.122.100:6802/2298200206,v1:192.168.122.100:6803/2298200206]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 10 09:45:22 compute-1 ceph-mon[79167]: osdmap e7: 2 total, 0 up, 2 in
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='osd.1 [v2:192.168.122.101:6800/2840395396,v1:192.168.122.101:6801/2840395396]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:22 compute-1 ceph-mon[79167]: purged_snaps scrub starts
Oct 10 09:45:22 compute-1 ceph-mon[79167]: purged_snaps scrub ok
Oct 10 09:45:22 compute-1 ceph-mon[79167]: pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='osd.1 [v2:192.168.122.101:6800/2840395396,v1:192.168.122.101:6801/2840395396]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct 10 09:45:22 compute-1 ceph-mon[79167]: osdmap e8: 2 total, 0 up, 2 in
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='osd.1 [v2:192.168.122.101:6800/2840395396,v1:192.168.122.101:6801/2840395396]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: purged_snaps scrub starts
Oct 10 09:45:22 compute-1 ceph-mon[79167]: purged_snaps scrub ok
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='osd.1 [v2:192.168.122.101:6800/2840395396,v1:192.168.122.101:6801/2840395396]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Oct 10 09:45:22 compute-1 ceph-mon[79167]: osdmap e9: 2 total, 0 up, 2 in
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: Adjusting osd_memory_target on compute-1 to  5248M
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:22 compute-1 ceph-mon[79167]: OSD bench result of 8693.274022 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: osd.0 [v2:192.168.122.100:6802/2298200206,v1:192.168.122.100:6803/2298200206] boot
Oct 10 09:45:22 compute-1 ceph-mon[79167]: osdmap e10: 2 total, 1 up, 2 in
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: Adjusting osd_memory_target on compute-0 to 128.0M
Oct 10 09:45:22 compute-1 ceph-mon[79167]: Unable to set osd_memory_target on compute-0 to 134240665: error parsing value: Value '134240665' is below minimum 939524096
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct 10 09:45:22 compute-1 ceph-mon[79167]: osdmap e11: 2 total, 1 up, 2 in
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: pgmap v44: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct 10 09:45:22 compute-1 ceph-mon[79167]: osdmap e12: 2 total, 1 up, 2 in
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: OSD bench result of 2508.856277 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: osd.1 [v2:192.168.122.101:6800/2840395396,v1:192.168.122.101:6801/2840395396] boot
Oct 10 09:45:22 compute-1 ceph-mon[79167]: osdmap e13: 2 total, 2 up, 2 in
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: pgmap v47: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct 10 09:45:22 compute-1 ceph-mon[79167]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 10 09:45:22 compute-1 ceph-mon[79167]: osdmap e14: 2 total, 2 up, 2 in
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: osdmap e15: 2 total, 2 up, 2 in
Oct 10 09:45:22 compute-1 ceph-mon[79167]: pgmap v50: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 09:45:22 compute-1 ceph-mon[79167]: mgrmap e9: compute-0.xkdepb(active, since 87s)
Oct 10 09:45:22 compute-1 ceph-mon[79167]: pgmap v51: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 09:45:22 compute-1 ceph-mon[79167]: pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 09:45:22 compute-1 ceph-mon[79167]: pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 09:45:22 compute-1 ceph-mon[79167]: pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 09:45:22 compute-1 ceph-mon[79167]: pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: Updating compute-2:/etc/ceph/ceph.conf
Oct 10 09:45:22 compute-1 ceph-mon[79167]: Updating compute-2:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:45:22 compute-1 ceph-mon[79167]: pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 09:45:22 compute-1 ceph-mon[79167]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 10 09:45:22 compute-1 ceph-mon[79167]: Updating compute-2:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:22 compute-1 ceph-mon[79167]: pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: Deploying daemon mon.compute-2 on compute-2
Oct 10 09:45:22 compute-1 ceph-mon[79167]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Oct 10 09:45:22 compute-1 ceph-mon[79167]: Cluster is now healthy
Oct 10 09:45:22 compute-1 ceph-mon[79167]: pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1167870161' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:45:22 compute-1 ceph-mon[79167]: mon.compute-1@-1(synchronizing).paxosservice(auth 1..7) refresh upgraded, format 0 -> 3
Oct 10 09:45:28 compute-1 ceph-mon[79167]: mon.compute-1@-1(probing) e3  my rank is now 2 (was -1)
Oct 10 09:45:28 compute-1 ceph-mon[79167]: log_channel(cluster) log [INF] : mon.compute-1 calling monitor election
Oct 10 09:45:28 compute-1 ceph-mon[79167]: paxos.2).electionLogic(0) init, first boot, initializing epoch at 1 
Oct 10 09:45:28 compute-1 ceph-mon[79167]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 10 09:45:31 compute-1 ceph-mon[79167]: mon.compute-1@2(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 10 09:45:31 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Oct 10 09:45:31 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout}
Oct 10 09:45:31 compute-1 ceph-mon[79167]: Deploying daemon mon.compute-1 on compute-1
Oct 10 09:45:31 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 10 09:45:31 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 10 09:45:31 compute-1 ceph-mon[79167]: mon.compute-0 calling monitor election
Oct 10 09:45:31 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 10 09:45:31 compute-1 ceph-mon[79167]: pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 09:45:31 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 10 09:45:31 compute-1 ceph-mon[79167]: mon.compute-2 calling monitor election
Oct 10 09:45:31 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 10 09:45:31 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 10 09:45:31 compute-1 ceph-mon[79167]: pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 09:45:31 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 10 09:45:31 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 10 09:45:31 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 10 09:45:31 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 10 09:45:31 compute-1 ceph-mon[79167]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Oct 10 09:45:31 compute-1 ceph-mon[79167]: monmap epoch 2
Oct 10 09:45:31 compute-1 ceph-mon[79167]: fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:45:31 compute-1 ceph-mon[79167]: last_changed 2025-10-10T09:45:19.903599+0000
Oct 10 09:45:31 compute-1 ceph-mon[79167]: created 2025-10-10T09:43:13.233588+0000
Oct 10 09:45:31 compute-1 ceph-mon[79167]: min_mon_release 19 (squid)
Oct 10 09:45:31 compute-1 ceph-mon[79167]: election_strategy: 1
Oct 10 09:45:31 compute-1 ceph-mon[79167]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Oct 10 09:45:31 compute-1 ceph-mon[79167]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Oct 10 09:45:31 compute-1 ceph-mon[79167]: fsmap 
Oct 10 09:45:31 compute-1 ceph-mon[79167]: osdmap e15: 2 total, 2 up, 2 in
Oct 10 09:45:31 compute-1 ceph-mon[79167]: mgrmap e9: compute-0.xkdepb(active, since 106s)
Oct 10 09:45:31 compute-1 ceph-mon[79167]: overall HEALTH_OK
Oct 10 09:45:31 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:31 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:31 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:31 compute-1 ceph-mon[79167]: pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 09:45:31 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:31 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.gkrssp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 10 09:45:31 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.gkrssp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct 10 09:45:31 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 09:45:31 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:45:31 compute-1 ceph-mon[79167]: Deploying daemon mgr.compute-2.gkrssp on compute-2
Oct 10 09:45:31 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 10 09:45:31 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 10 09:45:31 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 10 09:45:31 compute-1 ceph-mon[79167]: mgrc update_daemon_metadata mon.compute-1 metadata {addrs=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-1,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-1,kernel_description=#1 SMP PREEMPT_DYNAMIC Tue Sep 30 07:37:35 UTC 2025,kernel_version=5.14.0-621.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864356,os=Linux}
Oct 10 09:45:31 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 10 09:45:31 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 10 09:45:31 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 10 09:45:31 compute-1 ceph-mon[79167]: mon.compute-0 calling monitor election
Oct 10 09:45:31 compute-1 ceph-mon[79167]: mon.compute-2 calling monitor election
Oct 10 09:45:31 compute-1 ceph-mon[79167]: pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 09:45:31 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 10 09:45:31 compute-1 ceph-mon[79167]: mon.compute-1 calling monitor election
Oct 10 09:45:31 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 10 09:45:31 compute-1 ceph-mon[79167]: pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 09:45:31 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 10 09:45:31 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 10 09:45:31 compute-1 ceph-mon[79167]: pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 09:45:31 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 10 09:45:31 compute-1 ceph-mon[79167]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Oct 10 09:45:31 compute-1 ceph-mon[79167]: monmap epoch 3
Oct 10 09:45:31 compute-1 ceph-mon[79167]: fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:45:31 compute-1 ceph-mon[79167]: last_changed 2025-10-10T09:45:26.181993+0000
Oct 10 09:45:31 compute-1 ceph-mon[79167]: created 2025-10-10T09:43:13.233588+0000
Oct 10 09:45:31 compute-1 ceph-mon[79167]: min_mon_release 19 (squid)
Oct 10 09:45:31 compute-1 ceph-mon[79167]: election_strategy: 1
Oct 10 09:45:31 compute-1 ceph-mon[79167]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Oct 10 09:45:31 compute-1 ceph-mon[79167]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Oct 10 09:45:31 compute-1 ceph-mon[79167]: 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Oct 10 09:45:31 compute-1 ceph-mon[79167]: fsmap 
Oct 10 09:45:31 compute-1 ceph-mon[79167]: osdmap e15: 2 total, 2 up, 2 in
Oct 10 09:45:31 compute-1 ceph-mon[79167]: mgrmap e9: compute-0.xkdepb(active, since 112s)
Oct 10 09:45:31 compute-1 ceph-mon[79167]: overall HEALTH_OK
Oct 10 09:45:31 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:31 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:31 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:31 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:31 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.rfugxc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 10 09:45:31 compute-1 sudo[79206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:45:31 compute-1 sudo[79206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:31 compute-1 sudo[79206]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:31 compute-1 sudo[79231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:45:31 compute-1 sudo[79231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:31 compute-1 podman[79298]: 2025-10-10 09:45:31.903852236 +0000 UTC m=+0.046855956 container create f2a11b2900ad52095fdfd66a114a057b1d4aa8b7989132e69175ebbeaa0691ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_khorana, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:45:31 compute-1 systemd[1]: Started libpod-conmon-f2a11b2900ad52095fdfd66a114a057b1d4aa8b7989132e69175ebbeaa0691ed.scope.
Oct 10 09:45:31 compute-1 podman[79298]: 2025-10-10 09:45:31.882732699 +0000 UTC m=+0.025736459 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:45:31 compute-1 systemd[1]: Started libcrun container.
Oct 10 09:45:31 compute-1 podman[79298]: 2025-10-10 09:45:31.996636043 +0000 UTC m=+0.139639783 container init f2a11b2900ad52095fdfd66a114a057b1d4aa8b7989132e69175ebbeaa0691ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_khorana, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:45:32 compute-1 podman[79298]: 2025-10-10 09:45:32.005108014 +0000 UTC m=+0.148111744 container start f2a11b2900ad52095fdfd66a114a057b1d4aa8b7989132e69175ebbeaa0691ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_khorana, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:45:32 compute-1 podman[79298]: 2025-10-10 09:45:32.008990834 +0000 UTC m=+0.151994574 container attach f2a11b2900ad52095fdfd66a114a057b1d4aa8b7989132e69175ebbeaa0691ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_khorana, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 09:45:32 compute-1 friendly_khorana[79314]: 167 167
Oct 10 09:45:32 compute-1 systemd[1]: libpod-f2a11b2900ad52095fdfd66a114a057b1d4aa8b7989132e69175ebbeaa0691ed.scope: Deactivated successfully.
Oct 10 09:45:32 compute-1 podman[79298]: 2025-10-10 09:45:32.011247003 +0000 UTC m=+0.154250763 container died f2a11b2900ad52095fdfd66a114a057b1d4aa8b7989132e69175ebbeaa0691ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 10 09:45:32 compute-1 systemd[1]: var-lib-containers-storage-overlay-b8c9af3d15f3bacf7cda741524897af99997b21e09a238c34b6eaf4bad8d5f76-merged.mount: Deactivated successfully.
Oct 10 09:45:32 compute-1 podman[79298]: 2025-10-10 09:45:32.049841664 +0000 UTC m=+0.192845394 container remove f2a11b2900ad52095fdfd66a114a057b1d4aa8b7989132e69175ebbeaa0691ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_khorana, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:45:32 compute-1 systemd[1]: libpod-conmon-f2a11b2900ad52095fdfd66a114a057b1d4aa8b7989132e69175ebbeaa0691ed.scope: Deactivated successfully.
Oct 10 09:45:32 compute-1 systemd[1]: Reloading.
Oct 10 09:45:32 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e15 _set_new_cache_sizes cache_size:1019933712 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:45:32 compute-1 systemd-rc-local-generator[79353]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:45:32 compute-1 systemd-sysv-generator[79361]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:45:32 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.rfugxc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct 10 09:45:32 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 09:45:32 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:45:32 compute-1 ceph-mon[79167]: Deploying daemon mgr.compute-1.rfugxc on compute-1
Oct 10 09:45:32 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 10 09:45:32 compute-1 systemd[1]: Reloading.
Oct 10 09:45:32 compute-1 systemd-rc-local-generator[79395]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:45:32 compute-1 systemd-sysv-generator[79398]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:45:32 compute-1 systemd[1]: Starting Ceph mgr.compute-1.rfugxc for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 09:45:33 compute-1 podman[79457]: 2025-10-10 09:45:33.072855614 +0000 UTC m=+0.063706053 container create 90ca3b90e3affd6ecfdba94c0fe0432520f03284365e8041b5f975340484362b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Oct 10 09:45:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d98c02601e79a520e9c8db483674523289d81151aca749858287a099f8938da2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d98c02601e79a520e9c8db483674523289d81151aca749858287a099f8938da2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d98c02601e79a520e9c8db483674523289d81151aca749858287a099f8938da2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d98c02601e79a520e9c8db483674523289d81151aca749858287a099f8938da2/merged/var/lib/ceph/mgr/ceph-compute-1.rfugxc supports timestamps until 2038 (0x7fffffff)
Oct 10 09:45:33 compute-1 podman[79457]: 2025-10-10 09:45:33.04031747 +0000 UTC m=+0.031167919 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:45:33 compute-1 podman[79457]: 2025-10-10 09:45:33.145512159 +0000 UTC m=+0.136362658 container init 90ca3b90e3affd6ecfdba94c0fe0432520f03284365e8041b5f975340484362b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 10 09:45:33 compute-1 podman[79457]: 2025-10-10 09:45:33.157967082 +0000 UTC m=+0.148817521 container start 90ca3b90e3affd6ecfdba94c0fe0432520f03284365e8041b5f975340484362b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Oct 10 09:45:33 compute-1 bash[79457]: 90ca3b90e3affd6ecfdba94c0fe0432520f03284365e8041b5f975340484362b
Oct 10 09:45:33 compute-1 systemd[1]: Started Ceph mgr.compute-1.rfugxc for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 09:45:33 compute-1 ceph-mgr[79476]: set uid:gid to 167:167 (ceph:ceph)
Oct 10 09:45:33 compute-1 ceph-mgr[79476]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct 10 09:45:33 compute-1 ceph-mgr[79476]: pidfile_write: ignore empty --pid-file
Oct 10 09:45:33 compute-1 sudo[79231]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:33 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'alerts'
Oct 10 09:45:33 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3667835426' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 10 09:45:33 compute-1 ceph-mon[79167]: pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct 10 09:45:33 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:33 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:33 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:33 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:33 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 10 09:45:33 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 10 09:45:33 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:45:33 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e16 e16: 2 total, 2 up, 2 in
Oct 10 09:45:33 compute-1 ceph-mgr[79476]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 10 09:45:33 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'balancer'
Oct 10 09:45:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:33.343+0000 7f48edd15140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 10 09:45:33 compute-1 ceph-mgr[79476]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 10 09:45:33 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'cephadm'
Oct 10 09:45:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:33.419+0000 7f48edd15140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 10 09:45:33 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 16 pg[2.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [1] r=0 lpr=16 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:34 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'crash'
Oct 10 09:45:34 compute-1 ceph-mgr[79476]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 10 09:45:34 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'dashboard'
Oct 10 09:45:34 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:34.243+0000 7f48edd15140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 10 09:45:34 compute-1 ceph-mon[79167]: Deploying daemon crash.compute-2 on compute-2
Oct 10 09:45:34 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3667835426' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 10 09:45:34 compute-1 ceph-mon[79167]: osdmap e16: 2 total, 2 up, 2 in
Oct 10 09:45:34 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3269086226' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 10 09:45:34 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e17 e17: 2 total, 2 up, 2 in
Oct 10 09:45:34 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 17 pg[2.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [1] r=0 lpr=16 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:34 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'devicehealth'
Oct 10 09:45:34 compute-1 ceph-mgr[79476]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 10 09:45:34 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'diskprediction_local'
Oct 10 09:45:34 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:34.860+0000 7f48edd15140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 10 09:45:35 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 10 09:45:35 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 10 09:45:35 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]:   from numpy import show_config as show_numpy_config
Oct 10 09:45:35 compute-1 ceph-mgr[79476]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 10 09:45:35 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'influx'
Oct 10 09:45:35 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:35.024+0000 7f48edd15140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 10 09:45:35 compute-1 ceph-mgr[79476]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 10 09:45:35 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'insights'
Oct 10 09:45:35 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:35.101+0000 7f48edd15140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 10 09:45:35 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'iostat'
Oct 10 09:45:35 compute-1 ceph-mgr[79476]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 10 09:45:35 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'k8sevents'
Oct 10 09:45:35 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:35.236+0000 7f48edd15140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 10 09:45:35 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3269086226' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 10 09:45:35 compute-1 ceph-mon[79167]: osdmap e17: 2 total, 2 up, 2 in
Oct 10 09:45:35 compute-1 ceph-mon[79167]: pgmap v68: 3 pgs: 2 unknown, 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 10 09:45:35 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:35 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:35 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:35 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:35 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 09:45:35 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:45:35 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:45:35 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:45:35 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:45:35 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1727378227' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 10 09:45:35 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e18 e18: 2 total, 2 up, 2 in
Oct 10 09:45:35 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'localpool'
Oct 10 09:45:35 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'mds_autoscaler'
Oct 10 09:45:35 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'mirroring'
Oct 10 09:45:36 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'nfs'
Oct 10 09:45:36 compute-1 ceph-mgr[79476]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 10 09:45:36 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'orchestrator'
Oct 10 09:45:36 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:36.251+0000 7f48edd15140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 10 09:45:36 compute-1 ceph-mon[79167]: Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 10 09:45:36 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1727378227' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 10 09:45:36 compute-1 ceph-mon[79167]: osdmap e18: 2 total, 2 up, 2 in
Oct 10 09:45:36 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:36 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1828731644' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 10 09:45:36 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e19 e19: 2 total, 2 up, 2 in
Oct 10 09:45:36 compute-1 ceph-mgr[79476]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 10 09:45:36 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'osd_perf_query'
Oct 10 09:45:36 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:36.468+0000 7f48edd15140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 10 09:45:36 compute-1 ceph-mgr[79476]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 10 09:45:36 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'osd_support'
Oct 10 09:45:36 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:36.540+0000 7f48edd15140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 10 09:45:36 compute-1 ceph-mgr[79476]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 10 09:45:36 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'pg_autoscaler'
Oct 10 09:45:36 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:36.606+0000 7f48edd15140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 10 09:45:36 compute-1 ceph-mgr[79476]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 10 09:45:36 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'progress'
Oct 10 09:45:36 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:36.681+0000 7f48edd15140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 10 09:45:36 compute-1 ceph-mgr[79476]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 10 09:45:36 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:36.749+0000 7f48edd15140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 10 09:45:36 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'prometheus'
Oct 10 09:45:36 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e20 e20: 3 total, 2 up, 3 in
Oct 10 09:45:37 compute-1 ceph-mgr[79476]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 10 09:45:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:37.085+0000 7f48edd15140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 10 09:45:37 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'rbd_support'
Oct 10 09:45:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e20 _set_new_cache_sizes cache_size:1020053257 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:45:37 compute-1 ceph-mgr[79476]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 10 09:45:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:37.175+0000 7f48edd15140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 10 09:45:37 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'restful'
Oct 10 09:45:37 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'rgw'
Oct 10 09:45:37 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1828731644' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 10 09:45:37 compute-1 ceph-mon[79167]: osdmap e19: 2 total, 2 up, 2 in
Oct 10 09:45:37 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/3277074974' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "fd47bcfa-dab9-466a-b4bb-0169e493040a"}]: dispatch
Oct 10 09:45:37 compute-1 ceph-mon[79167]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "fd47bcfa-dab9-466a-b4bb-0169e493040a"}]: dispatch
Oct 10 09:45:37 compute-1 ceph-mon[79167]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "fd47bcfa-dab9-466a-b4bb-0169e493040a"}]': finished
Oct 10 09:45:37 compute-1 ceph-mon[79167]: osdmap e20: 3 total, 2 up, 3 in
Oct 10 09:45:37 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:37 compute-1 ceph-mon[79167]: pgmap v72: 5 pgs: 4 unknown, 1 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Oct 10 09:45:37 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3839621145' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 10 09:45:37 compute-1 ceph-mgr[79476]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 10 09:45:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:37.582+0000 7f48edd15140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 10 09:45:37 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'rook'
Oct 10 09:45:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e21 e21: 3 total, 2 up, 3 in
Oct 10 09:45:38 compute-1 ceph-mgr[79476]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 10 09:45:38 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'selftest'
Oct 10 09:45:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:38.114+0000 7f48edd15140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 10 09:45:38 compute-1 ceph-mgr[79476]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 10 09:45:38 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'snap_schedule'
Oct 10 09:45:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:38.179+0000 7f48edd15140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 10 09:45:38 compute-1 ceph-mgr[79476]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 10 09:45:38 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'stats'
Oct 10 09:45:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:38.258+0000 7f48edd15140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 10 09:45:38 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'status'
Oct 10 09:45:38 compute-1 ceph-mgr[79476]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 10 09:45:38 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'telegraf'
Oct 10 09:45:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:38.398+0000 7f48edd15140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 10 09:45:38 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1014583551' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 10 09:45:38 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3839621145' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 10 09:45:38 compute-1 ceph-mon[79167]: osdmap e21: 3 total, 2 up, 3 in
Oct 10 09:45:38 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:38 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 09:45:38 compute-1 ceph-mgr[79476]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 10 09:45:38 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'telemetry'
Oct 10 09:45:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:38.466+0000 7f48edd15140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 10 09:45:38 compute-1 ceph-mgr[79476]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 10 09:45:38 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'test_orchestrator'
Oct 10 09:45:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:38.609+0000 7f48edd15140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 10 09:45:38 compute-1 ceph-mgr[79476]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 10 09:45:38 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'volumes'
Oct 10 09:45:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:38.813+0000 7f48edd15140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 10 09:45:38 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e22 e22: 3 total, 2 up, 3 in
Oct 10 09:45:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 22 pg[7.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [1] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:39 compute-1 ceph-mgr[79476]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 10 09:45:39 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'zabbix'
Oct 10 09:45:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:39.089+0000 7f48edd15140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 10 09:45:39 compute-1 ceph-mgr[79476]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 10 09:45:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:45:39.168+0000 7f48edd15140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 10 09:45:39 compute-1 ceph-mgr[79476]: ms_deliver_dispatch: unhandled message 0x5647ad6bad00 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Oct 10 09:45:39 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2251912187' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 10 09:45:39 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct 10 09:45:39 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2251912187' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 10 09:45:39 compute-1 ceph-mon[79167]: osdmap e22: 3 total, 2 up, 3 in
Oct 10 09:45:39 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:39 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 09:45:39 compute-1 ceph-mon[79167]: Standby manager daemon compute-2.gkrssp started
Oct 10 09:45:39 compute-1 ceph-mon[79167]: pgmap v75: 7 pgs: 2 active+clean, 5 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 10 09:45:39 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 09:45:39 compute-1 ceph-mon[79167]: Standby manager daemon compute-1.rfugxc started
Oct 10 09:45:39 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e23 e23: 3 total, 2 up, 3 in
Oct 10 09:45:39 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 23 pg[2.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=23 pruub=10.466954231s) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active pruub 55.323619843s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:39 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 23 pg[2.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=23 pruub=10.466954231s) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown pruub 55.323619843s@ mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:39 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 23 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [1] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:40 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1271642618' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct 10 09:45:40 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:40 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:40 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct 10 09:45:40 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 09:45:40 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1271642618' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct 10 09:45:40 compute-1 ceph-mon[79167]: osdmap e23: 3 total, 2 up, 3 in
Oct 10 09:45:40 compute-1 ceph-mon[79167]: mgrmap e10: compute-0.xkdepb(active, since 2m), standbys: compute-2.gkrssp, compute-1.rfugxc
Oct 10 09:45:40 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:40 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr metadata", "who": "compute-2.gkrssp", "id": "compute-2.gkrssp"}]: dispatch
Oct 10 09:45:40 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr metadata", "who": "compute-1.rfugxc", "id": "compute-1.rfugxc"}]: dispatch
Oct 10 09:45:40 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 09:45:40 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e24 e24: 3 total, 2 up, 3 in
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.1e( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.1f( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.1d( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.1c( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.a( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.9( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.8( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.7( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.4( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.2( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.1( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.5( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.3( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.6( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.1b( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.b( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.c( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.d( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.e( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.f( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.10( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.11( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.12( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.13( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.14( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.15( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.16( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.18( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.19( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.1a( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.17( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.1e( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.1f( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.1d( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.9( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.1c( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.8( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.7( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.a( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.2( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.4( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.1( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.0( empty local-lis/les=23/24 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.5( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.b( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.c( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.3( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.d( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.e( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.f( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.1b( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.10( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.6( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.12( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.11( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.13( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.15( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.14( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.16( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.18( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.19( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.1a( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 24 pg[2.17( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [1] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:40 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Oct 10 09:45:40 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Oct 10 09:45:41 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2550341542' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct 10 09:45:41 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct 10 09:45:41 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2550341542' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct 10 09:45:41 compute-1 ceph-mon[79167]: osdmap e24: 3 total, 2 up, 3 in
Oct 10 09:45:41 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:41 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 09:45:41 compute-1 ceph-mon[79167]: 2.1e scrub starts
Oct 10 09:45:41 compute-1 ceph-mon[79167]: pgmap v78: 38 pgs: 6 active+clean, 32 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 10 09:45:41 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 09:45:41 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 09:45:41 compute-1 ceph-mon[79167]: 2.1e scrub ok
Oct 10 09:45:41 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e25 e25: 3 total, 2 up, 3 in
Oct 10 09:45:41 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.1d deep-scrub starts
Oct 10 09:45:41 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.1d deep-scrub ok
Oct 10 09:45:42 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e25 _set_new_cache_sizes cache_size:1020054711 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:45:42 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1162723757' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct 10 09:45:42 compute-1 ceph-mon[79167]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 10 09:45:42 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct 10 09:45:42 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 09:45:42 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 09:45:42 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1162723757' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct 10 09:45:42 compute-1 ceph-mon[79167]: osdmap e25: 3 total, 2 up, 3 in
Oct 10 09:45:42 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:42 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 09:45:42 compute-1 ceph-mon[79167]: 2.1d deep-scrub starts
Oct 10 09:45:42 compute-1 ceph-mon[79167]: 2.1d deep-scrub ok
Oct 10 09:45:42 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Oct 10 09:45:42 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Oct 10 09:45:42 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e26 e26: 3 total, 2 up, 3 in
Oct 10 09:45:43 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct 10 09:45:43 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:45:43 compute-1 ceph-mon[79167]: Deploying daemon osd.2 on compute-2
Oct 10 09:45:43 compute-1 ceph-mon[79167]: 2.1f scrub starts
Oct 10 09:45:43 compute-1 ceph-mon[79167]: 2.1f scrub ok
Oct 10 09:45:43 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/616535579' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct 10 09:45:43 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Oct 10 09:45:43 compute-1 ceph-mon[79167]: osdmap e26: 3 total, 2 up, 3 in
Oct 10 09:45:43 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:43 compute-1 ceph-mon[79167]: pgmap v81: 100 pgs: 38 active+clean, 62 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 10 09:45:43 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 09:45:43 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 09:45:43 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Oct 10 09:45:43 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Oct 10 09:45:43 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e27 e27: 3 total, 2 up, 3 in
Oct 10 09:45:44 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Oct 10 09:45:44 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Oct 10 09:45:44 compute-1 ceph-mon[79167]: 4.1e scrub starts
Oct 10 09:45:44 compute-1 ceph-mon[79167]: 4.1e scrub ok
Oct 10 09:45:44 compute-1 ceph-mon[79167]: 2.9 scrub starts
Oct 10 09:45:44 compute-1 ceph-mon[79167]: 2.9 scrub ok
Oct 10 09:45:44 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/616535579' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct 10 09:45:44 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 09:45:44 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 09:45:44 compute-1 ceph-mon[79167]: osdmap e27: 3 total, 2 up, 3 in
Oct 10 09:45:44 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:44 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e28 e28: 3 total, 2 up, 3 in
Oct 10 09:45:45 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Oct 10 09:45:45 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Oct 10 09:45:46 compute-1 ceph-mon[79167]: 4.1f scrub starts
Oct 10 09:45:46 compute-1 ceph-mon[79167]: 4.1f scrub ok
Oct 10 09:45:46 compute-1 ceph-mon[79167]: 2.1c scrub starts
Oct 10 09:45:46 compute-1 ceph-mon[79167]: 2.1c scrub ok
Oct 10 09:45:46 compute-1 ceph-mon[79167]: pgmap v83: 162 pgs: 2 peering, 98 active+clean, 62 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 10 09:45:46 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2263940004' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct 10 09:45:46 compute-1 ceph-mon[79167]: osdmap e28: 3 total, 2 up, 3 in
Oct 10 09:45:46 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:46 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e29 e29: 3 total, 2 up, 3 in
Oct 10 09:45:46 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Oct 10 09:45:46 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Oct 10 09:45:47 compute-1 ceph-mon[79167]: 3.18 scrub starts
Oct 10 09:45:47 compute-1 ceph-mon[79167]: 3.18 scrub ok
Oct 10 09:45:47 compute-1 ceph-mon[79167]: 2.8 scrub starts
Oct 10 09:45:47 compute-1 ceph-mon[79167]: 2.8 scrub ok
Oct 10 09:45:47 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2263940004' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct 10 09:45:47 compute-1 ceph-mon[79167]: osdmap e29: 3 total, 2 up, 3 in
Oct 10 09:45:47 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:47 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:47 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:47 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:45:47 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.a scrub starts
Oct 10 09:45:47 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.a scrub ok
Oct 10 09:45:48 compute-1 ceph-mon[79167]: 3.17 scrub starts
Oct 10 09:45:48 compute-1 ceph-mon[79167]: 3.17 scrub ok
Oct 10 09:45:48 compute-1 ceph-mon[79167]: 2.7 scrub starts
Oct 10 09:45:48 compute-1 ceph-mon[79167]: 2.7 scrub ok
Oct 10 09:45:48 compute-1 ceph-mon[79167]: pgmap v86: 162 pgs: 2 peering, 98 active+clean, 62 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 10 09:45:48 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2169807361' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct 10 09:45:48 compute-1 ceph-mon[79167]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 10 09:45:48 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e30 e30: 3 total, 2 up, 3 in
Oct 10 09:45:48 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Oct 10 09:45:48 compute-1 sudo[79508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 09:45:48 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Oct 10 09:45:48 compute-1 sudo[79508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:48 compute-1 sudo[79508]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:49 compute-1 ceph-mon[79167]: 4.10 scrub starts
Oct 10 09:45:49 compute-1 ceph-mon[79167]: 4.10 scrub ok
Oct 10 09:45:49 compute-1 ceph-mon[79167]: 2.a scrub starts
Oct 10 09:45:49 compute-1 ceph-mon[79167]: 2.a scrub ok
Oct 10 09:45:49 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2169807361' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct 10 09:45:49 compute-1 ceph-mon[79167]: osdmap e30: 3 total, 2 up, 3 in
Oct 10 09:45:49 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:49 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:49 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:49 compute-1 sudo[79533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:45:49 compute-1 sudo[79533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:49 compute-1 sudo[79533]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:49 compute-1 sudo[79558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 10 09:45:49 compute-1 sudo[79558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:49 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Oct 10 09:45:49 compute-1 podman[79655]: 2025-10-10 09:45:49.906430365 +0000 UTC m=+0.104169537 container exec 8a2c16c69263d410aefee28722d7403147210c67386f896d09115878d61e6ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:45:49 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Oct 10 09:45:50 compute-1 podman[79655]: 2025-10-10 09:45:50.024834908 +0000 UTC m=+0.222574030 container exec_died 8a2c16c69263d410aefee28722d7403147210c67386f896d09115878d61e6ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:45:50 compute-1 ceph-mon[79167]: 3.16 deep-scrub starts
Oct 10 09:45:50 compute-1 ceph-mon[79167]: 3.16 deep-scrub ok
Oct 10 09:45:50 compute-1 ceph-mon[79167]: 2.4 scrub starts
Oct 10 09:45:50 compute-1 ceph-mon[79167]: 2.4 scrub ok
Oct 10 09:45:50 compute-1 ceph-mon[79167]: pgmap v88: 162 pgs: 2 peering, 98 active+clean, 62 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 10 09:45:50 compute-1 ceph-mon[79167]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 10 09:45:50 compute-1 ceph-mon[79167]: Cluster is now healthy
Oct 10 09:45:50 compute-1 ceph-mon[79167]: from='osd.2 [v2:192.168.122.102:6800/269809354,v1:192.168.122.102:6801/269809354]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct 10 09:45:50 compute-1 ceph-mon[79167]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct 10 09:45:50 compute-1 ceph-mon[79167]: 2.1 scrub starts
Oct 10 09:45:50 compute-1 ceph-mon[79167]: 2.1 scrub ok
Oct 10 09:45:50 compute-1 sudo[79558]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:50 compute-1 sudo[79743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:45:50 compute-1 sudo[79743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:50 compute-1 sudo[79743]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:50 compute-1 sudo[79768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 09:45:50 compute-1 sudo[79768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:50 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e31 e31: 3 total, 2 up, 3 in
Oct 10 09:45:50 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Oct 10 09:45:50 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Oct 10 09:45:51 compute-1 ceph-mon[79167]: 4.11 deep-scrub starts
Oct 10 09:45:51 compute-1 ceph-mon[79167]: 4.11 deep-scrub ok
Oct 10 09:45:51 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:51 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:51 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:51 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:51 compute-1 ceph-mon[79167]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct 10 09:45:51 compute-1 ceph-mon[79167]: osdmap e31: 3 total, 2 up, 3 in
Oct 10 09:45:51 compute-1 ceph-mon[79167]: from='osd.2 [v2:192.168.122.102:6800/269809354,v1:192.168.122.102:6801/269809354]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Oct 10 09:45:51 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:51 compute-1 ceph-mon[79167]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Oct 10 09:45:51 compute-1 ceph-mon[79167]: 2.0 scrub starts
Oct 10 09:45:51 compute-1 ceph-mon[79167]: 2.0 scrub ok
Oct 10 09:45:51 compute-1 ceph-mon[79167]: pgmap v90: 162 pgs: 162 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 10 09:45:51 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 09:45:51 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 09:45:51 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 09:45:51 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 09:45:51 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 09:45:51 compute-1 sudo[79768]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:51 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e32 e32: 3 total, 2 up, 3 in
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.1e( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.118583679s) [0] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 69.874893188s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.1d( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.124971390s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 69.881309509s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.1c( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.125008583s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 69.881362915s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.1e( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.118496895s) [0] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.874893188s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.1c( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.125008583s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881362915s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.1d( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.124971390s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881309509s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.a( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.124822617s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 69.881538391s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.a( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.124822617s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881538391s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.1b( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.125357628s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 69.882019043s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.1b( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.125357628s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.882019043s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.1f( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.124114037s) [0] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 69.881301880s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.1f( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.124033928s) [0] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881301880s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.6( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.124660492s) [0] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 69.882041931s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.6( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.124625206s) [0] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.882041931s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.9( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123898506s) [0] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 69.881355286s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.9( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123868942s) [0] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881355286s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.4( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123972893s) [0] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 69.881561279s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.4( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123950005s) [0] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881561279s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.1( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123935699s) [0] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 69.881607056s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.1( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123908043s) [0] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881607056s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.5( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.124008179s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 69.881774902s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.5( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.124008179s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881774902s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.b( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123831749s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 69.881782532s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.b( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123831749s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881782532s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.c( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123937607s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 69.881927490s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.c( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123937607s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881927490s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.d( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123929024s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 69.882003784s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.d( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123929024s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.882003784s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.e( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123857498s) [0] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 69.881988525s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.e( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123835564s) [0] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881988525s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.f( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123817444s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 69.881996155s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.f( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123817444s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881996155s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.10( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123723030s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 69.882041931s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.10( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123723030s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.882041931s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.12( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123621941s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 69.882072449s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.12( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123621941s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.882072449s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.15( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123581886s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 69.882110596s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.15( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123581886s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.882110596s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.19( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123530388s) [0] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 69.882301331s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.13( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123305321s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 69.882102966s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.19( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123510361s) [0] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.882301331s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.13( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123305321s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.882102966s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.18( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123338699s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 active pruub 69.882225037s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[2.18( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=32 pruub=13.123338699s) [] r=-1 lpr=32 pi=[23,32)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.882225037s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[4.18( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[5.18( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[5.1b( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[4.1a( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[4.1b( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[6.19( empty local-lis/les=0/0 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[6.1a( empty local-lis/les=0/0 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[5.1c( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[4.e( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[6.d( empty local-lis/les=0/0 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[3.1c( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[5.f( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[5.2( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[6.7( empty local-lis/les=0/0 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[4.5( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[5.7( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[3.3( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[3.5( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[6.3( empty local-lis/les=0/0 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[5.1( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[4.d( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[3.a( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[6.5( empty local-lis/les=0/0 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[6.2( empty local-lis/les=0/0 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[4.c( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[3.c( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[4.a( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[6.8( empty local-lis/les=0/0 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[3.f( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[5.9( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[6.a( empty local-lis/les=0/0 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[6.e( empty local-lis/les=0/0 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[6.15( empty local-lis/les=0/0 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[3.13( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[5.15( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[4.13( empty local-lis/les=0/0 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[3.14( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[5.10( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[3.16( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[3.10( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[5.16( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[5.11( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[5.1f( empty local-lis/les=0/0 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 32 pg[3.d( empty local-lis/les=0/0 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:45:51 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.2 deep-scrub starts
Oct 10 09:45:51 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.2 deep-scrub ok
Oct 10 09:45:51 compute-1 sudo[79824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 10 09:45:51 compute-1 sudo[79824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:51 compute-1 sudo[79824]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:52 compute-1 sudo[79849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph
Oct 10 09:45:52 compute-1 sudo[79849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:52 compute-1 sudo[79849]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:45:52 compute-1 ceph-mon[79167]: 3.15 deep-scrub starts
Oct 10 09:45:52 compute-1 ceph-mon[79167]: 3.15 deep-scrub ok
Oct 10 09:45:52 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2122384607' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 10 09:45:52 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2122384607' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 10 09:45:52 compute-1 ceph-mon[79167]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Oct 10 09:45:52 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 09:45:52 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 09:45:52 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 09:45:52 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 09:45:52 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 09:45:52 compute-1 ceph-mon[79167]: osdmap e32: 3 total, 2 up, 3 in
Oct 10 09:45:52 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:52 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:52 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:52 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:52 compute-1 ceph-mon[79167]: 2.2 deep-scrub starts
Oct 10 09:45:52 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct 10 09:45:52 compute-1 ceph-mon[79167]: Adjusting osd_memory_target on compute-2 to 128.0M
Oct 10 09:45:52 compute-1 ceph-mon[79167]: Unable to set osd_memory_target on compute-2 to 134243532: error parsing value: Value '134243532' is below minimum 939524096
Oct 10 09:45:52 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:45:52 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:45:52 compute-1 ceph-mon[79167]: Updating compute-0:/etc/ceph/ceph.conf
Oct 10 09:45:52 compute-1 ceph-mon[79167]: Updating compute-1:/etc/ceph/ceph.conf
Oct 10 09:45:52 compute-1 ceph-mon[79167]: Updating compute-2:/etc/ceph/ceph.conf
Oct 10 09:45:52 compute-1 ceph-mon[79167]: 2.2 deep-scrub ok
Oct 10 09:45:52 compute-1 sudo[79874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new
Oct 10 09:45:52 compute-1 sudo[79874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:52 compute-1 sudo[79874]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:52 compute-1 sudo[79899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:45:52 compute-1 sudo[79899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:52 compute-1 sudo[79899]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:52 compute-1 sudo[79924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new
Oct 10 09:45:52 compute-1 sudo[79924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:52 compute-1 sudo[79924]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:52 compute-1 sudo[79972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new
Oct 10 09:45:52 compute-1 sudo[79972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:52 compute-1 sudo[79972]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:52 compute-1 sudo[79997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new
Oct 10 09:45:52 compute-1 sudo[79997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:52 compute-1 sudo[79997]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:52 compute-1 sudo[80022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Oct 10 09:45:52 compute-1 sudo[80022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:52 compute-1 sudo[80022]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:52 compute-1 sudo[80047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config
Oct 10 09:45:52 compute-1 sudo[80047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:52 compute-1 sudo[80047]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:52 compute-1 sudo[80072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config
Oct 10 09:45:52 compute-1 sudo[80072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e33 e33: 3 total, 2 up, 3 in
Oct 10 09:45:52 compute-1 sudo[80072]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:52 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[5.10( empty local-lis/les=32/33 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[5.1f( empty local-lis/les=32/33 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[3.16( empty local-lis/les=32/33 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[5.11( empty local-lis/les=32/33 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[3.14( empty local-lis/les=32/33 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[5.15( empty local-lis/les=32/33 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[5.16( empty local-lis/les=32/33 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[3.13( empty local-lis/les=32/33 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[4.13( empty local-lis/les=32/33 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[3.10( empty local-lis/les=32/33 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[3.f( empty local-lis/les=32/33 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[3.c( empty local-lis/les=32/33 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[3.d( empty local-lis/les=32/33 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[5.9( empty local-lis/les=32/33 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[6.a( empty local-lis/les=32/33 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[6.8( empty local-lis/les=32/33 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[4.a( empty local-lis/les=32/33 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[4.d( empty local-lis/les=32/33 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[6.15( empty local-lis/les=32/33 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[3.a( empty local-lis/les=32/33 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[4.5( empty local-lis/les=32/33 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[6.7( empty local-lis/les=32/33 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[5.2( empty local-lis/les=32/33 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[5.7( empty local-lis/les=32/33 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[6.5( empty local-lis/les=32/33 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[3.3( empty local-lis/les=32/33 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[5.1( empty local-lis/les=32/33 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[3.5( empty local-lis/les=32/33 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[4.e( empty local-lis/les=32/33 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[5.f( empty local-lis/les=32/33 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[6.3( empty local-lis/les=32/33 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[6.d( empty local-lis/les=32/33 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[6.2( empty local-lis/les=32/33 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[6.e( empty local-lis/les=32/33 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[5.1c( empty local-lis/les=32/33 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[4.c( empty local-lis/les=32/33 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[3.1c( empty local-lis/les=32/33 n=0 ec=25/17 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[4.1b( empty local-lis/les=32/33 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[6.19( empty local-lis/les=32/33 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[5.1b( empty local-lis/les=32/33 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[4.18( empty local-lis/les=32/33 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[4.1a( empty local-lis/les=32/33 n=0 ec=25/18 lis/c=25/25 les/c/f=26/26/0 sis=32) [1] r=0 lpr=32 pi=[25,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[6.1a( empty local-lis/les=32/33 n=0 ec=27/21 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 33 pg[5.18( empty local-lis/les=32/33 n=0 ec=27/19 lis/c=27/27 les/c/f=28/28/0 sis=32) [1] r=0 lpr=32 pi=[27,32)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:45:52 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.3 deep-scrub starts
Oct 10 09:45:52 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.3 deep-scrub ok
Oct 10 09:45:52 compute-1 sudo[80097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new
Oct 10 09:45:52 compute-1 sudo[80097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:52 compute-1 sudo[80097]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:53 compute-1 sudo[80122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:45:53 compute-1 sudo[80122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:53 compute-1 sudo[80122]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:53 compute-1 sudo[80147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new
Oct 10 09:45:53 compute-1 sudo[80147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:53 compute-1 sudo[80147]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:53 compute-1 ceph-mon[79167]: purged_snaps scrub starts
Oct 10 09:45:53 compute-1 ceph-mon[79167]: purged_snaps scrub ok
Oct 10 09:45:53 compute-1 ceph-mon[79167]: 4.12 scrub starts
Oct 10 09:45:53 compute-1 ceph-mon[79167]: 4.12 scrub ok
Oct 10 09:45:53 compute-1 ceph-mon[79167]: Updating compute-2:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:45:53 compute-1 ceph-mon[79167]: Updating compute-0:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:45:53 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2975567301' entity='client.admin' 
Oct 10 09:45:53 compute-1 ceph-mon[79167]: Updating compute-1:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:45:53 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:53 compute-1 ceph-mon[79167]: osdmap e33: 3 total, 2 up, 3 in
Oct 10 09:45:53 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:53 compute-1 ceph-mon[79167]: 2.3 deep-scrub starts
Oct 10 09:45:53 compute-1 ceph-mon[79167]: 2.3 deep-scrub ok
Oct 10 09:45:53 compute-1 ceph-mon[79167]: pgmap v93: 162 pgs: 44 peering, 118 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 10 09:45:53 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:53 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:53 compute-1 sudo[80195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new
Oct 10 09:45:53 compute-1 sudo[80195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:53 compute-1 sudo[80195]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:53 compute-1 sudo[80220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new
Oct 10 09:45:53 compute-1 sudo[80220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:53 compute-1 sudo[80220]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:53 compute-1 sudo[80245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:45:53 compute-1 sudo[80245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:53 compute-1 sudo[80245]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:53 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Oct 10 09:45:53 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Oct 10 09:45:54 compute-1 ceph-mon[79167]: 3.1f scrub starts
Oct 10 09:45:54 compute-1 ceph-mon[79167]: 3.1f scrub ok
Oct 10 09:45:54 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:54 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:54 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:54 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:54 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:54 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 09:45:54 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:45:54 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:45:54 compute-1 ceph-mon[79167]: from='client.14292 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:45:54 compute-1 ceph-mon[79167]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct 10 09:45:54 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:54 compute-1 ceph-mon[79167]: Saving service ingress.rgw.default spec with placement count:2
Oct 10 09:45:54 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:54 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:54 compute-1 ceph-mon[79167]: 2.11 scrub starts
Oct 10 09:45:54 compute-1 ceph-mon[79167]: 2.11 scrub ok
Oct 10 09:45:54 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Oct 10 09:45:54 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Oct 10 09:45:55 compute-1 ceph-mon[79167]: 5.19 scrub starts
Oct 10 09:45:55 compute-1 ceph-mon[79167]: 5.19 scrub ok
Oct 10 09:45:55 compute-1 ceph-mon[79167]: OSD bench result of 9119.333889 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 10 09:45:55 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:55 compute-1 ceph-mon[79167]: 2.14 scrub starts
Oct 10 09:45:55 compute-1 ceph-mon[79167]: 2.14 scrub ok
Oct 10 09:45:55 compute-1 ceph-mon[79167]: pgmap v94: 162 pgs: 44 peering, 118 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 10 09:45:55 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e34 e34: 3 total, 3 up, 3 in
Oct 10 09:45:55 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Oct 10 09:45:55 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Oct 10 09:45:56 compute-1 ceph-mon[79167]: 3.1e scrub starts
Oct 10 09:45:56 compute-1 ceph-mon[79167]: 3.1e scrub ok
Oct 10 09:45:56 compute-1 ceph-mon[79167]: osd.2 [v2:192.168.122.102:6800/269809354,v1:192.168.122.102:6801/269809354] boot
Oct 10 09:45:56 compute-1 ceph-mon[79167]: osdmap e34: 3 total, 3 up, 3 in
Oct 10 09:45:56 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:45:56 compute-1 ceph-mon[79167]: from='client.14298 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:45:56 compute-1 ceph-mon[79167]: Saving service node-exporter spec with placement *
Oct 10 09:45:56 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:56 compute-1 ceph-mon[79167]: Saving service grafana spec with placement compute-0;count:1
Oct 10 09:45:56 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:56 compute-1 ceph-mon[79167]: Saving service prometheus spec with placement compute-0;count:1
Oct 10 09:45:56 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:56 compute-1 ceph-mon[79167]: Saving service alertmanager spec with placement compute-0;count:1
Oct 10 09:45:56 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:56 compute-1 ceph-mon[79167]: 2.16 scrub starts
Oct 10 09:45:56 compute-1 ceph-mon[79167]: 2.16 scrub ok
Oct 10 09:45:56 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e35 e35: 3 total, 3 up, 3 in
Oct 10 09:45:56 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.17 deep-scrub starts
Oct 10 09:45:56 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.17 deep-scrub ok
Oct 10 09:45:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:45:57 compute-1 ceph-mon[79167]: 6.18 scrub starts
Oct 10 09:45:57 compute-1 ceph-mon[79167]: 6.18 scrub ok
Oct 10 09:45:57 compute-1 ceph-mon[79167]: osdmap e35: 3 total, 3 up, 3 in
Oct 10 09:45:57 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:57 compute-1 ceph-mon[79167]: 2.17 deep-scrub starts
Oct 10 09:45:57 compute-1 ceph-mon[79167]: 2.17 deep-scrub ok
Oct 10 09:45:57 compute-1 ceph-mon[79167]: pgmap v97: 162 pgs: 80 peering, 82 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:45:57 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2898111592' entity='client.admin' 
Oct 10 09:45:57 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Oct 10 09:45:57 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Oct 10 09:45:58 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 34 pg[2.18( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.867969990s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.882225037s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:58 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 34 pg[2.15( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.867820263s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.882110596s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:58 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 35 pg[2.18( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.867899418s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.882225037s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:58 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 34 pg[2.13( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.867742062s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.882102966s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:58 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 35 pg[2.15( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.867722988s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.882110596s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:58 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 35 pg[2.13( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.867669582s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.882102966s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:58 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 34 pg[2.12( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.866700649s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.882072449s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:58 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 35 pg[2.12( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.865193844s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.882072449s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:58 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 34 pg[2.10( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.865051746s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.882041931s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:58 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 34 pg[2.f( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.864959717s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881996155s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:58 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 35 pg[2.10( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.864971638s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.882041931s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:58 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 35 pg[2.f( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.864916801s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881996155s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:58 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 34 pg[2.d( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.864692211s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.882003784s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:58 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 35 pg[2.d( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.864659786s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.882003784s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:58 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 34 pg[2.c( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.864485741s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881927490s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:58 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 34 pg[2.b( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.864300728s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881782532s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:58 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 35 pg[2.c( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.864441872s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881927490s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:58 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 35 pg[2.b( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.864252090s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881782532s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:58 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 34 pg[2.5( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.863999844s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881774902s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:58 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 35 pg[2.5( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.863965988s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881774902s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:58 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 34 pg[2.a( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.863377571s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881538391s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:58 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 35 pg[2.a( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.863338947s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881538391s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:58 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 34 pg[2.1b( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.863769054s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.882019043s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:58 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 34 pg[2.1c( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.863054276s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881362915s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:58 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 34 pg[2.1d( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.862929821s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881309509s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:45:58 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 35 pg[2.1c( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.863005161s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881362915s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:58 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 35 pg[2.1d( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.862899303s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.881309509s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:58 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 35 pg[2.1b( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=34 pruub=6.863616943s) [2] r=-1 lpr=34 pi=[23,34)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 69.882019043s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:45:58 compute-1 sudo[80270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 09:45:58 compute-1 sudo[80270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:45:58 compute-1 sudo[80270]: pam_unix(sudo:session): session closed for user root
Oct 10 09:45:58 compute-1 ceph-mon[79167]: 5.1d scrub starts
Oct 10 09:45:58 compute-1 ceph-mon[79167]: 5.1d scrub ok
Oct 10 09:45:58 compute-1 ceph-mon[79167]: 6.17 scrub starts
Oct 10 09:45:58 compute-1 ceph-mon[79167]: 6.17 scrub ok
Oct 10 09:45:58 compute-1 ceph-mon[79167]: 2.1a scrub starts
Oct 10 09:45:58 compute-1 ceph-mon[79167]: 2.1a scrub ok
Oct 10 09:45:58 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:58 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:58 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1237849469' entity='client.admin' 
Oct 10 09:45:58 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Oct 10 09:45:58 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Oct 10 09:45:59 compute-1 ceph-mon[79167]: 6.1f scrub starts
Oct 10 09:45:59 compute-1 ceph-mon[79167]: 6.1f scrub ok
Oct 10 09:45:59 compute-1 ceph-mon[79167]: 4.14 scrub starts
Oct 10 09:45:59 compute-1 ceph-mon[79167]: 4.14 scrub ok
Oct 10 09:45:59 compute-1 ceph-mon[79167]: Reconfiguring mon.compute-0 (monmap changed)...
Oct 10 09:45:59 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 10 09:45:59 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 10 09:45:59 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:45:59 compute-1 ceph-mon[79167]: Reconfiguring daemon mon.compute-0 on compute-0
Oct 10 09:45:59 compute-1 ceph-mon[79167]: 5.1f scrub starts
Oct 10 09:45:59 compute-1 ceph-mon[79167]: 5.1f scrub ok
Oct 10 09:45:59 compute-1 ceph-mon[79167]: pgmap v98: 162 pgs: 36 peering, 126 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:45:59 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:59 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:45:59 compute-1 ceph-mon[79167]: Reconfiguring mgr.compute-0.xkdepb (monmap changed)...
Oct 10 09:45:59 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.xkdepb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 10 09:45:59 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 09:45:59 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:45:59 compute-1 ceph-mon[79167]: Reconfiguring daemon mgr.compute-0.xkdepb on compute-0
Oct 10 09:45:59 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3410162506' entity='client.admin' 
Oct 10 09:45:59 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Oct 10 09:45:59 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Oct 10 09:46:00 compute-1 ceph-mon[79167]: 6.c scrub starts
Oct 10 09:46:00 compute-1 ceph-mon[79167]: 6.c scrub ok
Oct 10 09:46:00 compute-1 ceph-mon[79167]: 3.e scrub starts
Oct 10 09:46:00 compute-1 ceph-mon[79167]: 3.e scrub ok
Oct 10 09:46:00 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:00 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:00 compute-1 ceph-mon[79167]: Reconfiguring crash.compute-0 (monmap changed)...
Oct 10 09:46:00 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 10 09:46:00 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:46:00 compute-1 ceph-mon[79167]: Reconfiguring daemon crash.compute-0 on compute-0
Oct 10 09:46:00 compute-1 ceph-mon[79167]: 5.10 scrub starts
Oct 10 09:46:00 compute-1 ceph-mon[79167]: 5.10 scrub ok
Oct 10 09:46:00 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Oct 10 09:46:00 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Oct 10 09:46:01 compute-1 sudo[80324]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pafmsiifvndvvzphyratzqrrcnjrrxig ; /usr/bin/python3'
Oct 10 09:46:01 compute-1 sudo[80324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:46:01 compute-1 sudo[80316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:46:01 compute-1 sudo[80316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:01 compute-1 sudo[80316]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:01 compute-1 sudo[80346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:46:01 compute-1 sudo[80346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:01 compute-1 ceph-mon[79167]: 4.f scrub starts
Oct 10 09:46:01 compute-1 ceph-mon[79167]: 4.f scrub ok
Oct 10 09:46:01 compute-1 ceph-mon[79167]: 5.8 scrub starts
Oct 10 09:46:01 compute-1 ceph-mon[79167]: 5.8 scrub ok
Oct 10 09:46:01 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:01 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:01 compute-1 ceph-mon[79167]: Reconfiguring osd.0 (monmap changed)...
Oct 10 09:46:01 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct 10 09:46:01 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:46:01 compute-1 ceph-mon[79167]: Reconfiguring daemon osd.0 on compute-0
Oct 10 09:46:01 compute-1 ceph-mon[79167]: 5.11 scrub starts
Oct 10 09:46:01 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2517476288' entity='client.admin' 
Oct 10 09:46:01 compute-1 ceph-mon[79167]: 5.11 scrub ok
Oct 10 09:46:01 compute-1 ceph-mon[79167]: pgmap v99: 162 pgs: 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:46:01 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:01 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:01 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 10 09:46:01 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:46:01 compute-1 python3[80338]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a -f 'name=ceph-?(.*)-mgr.*' --format \{\{\.Command\}\} --no-trunc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:46:01 compute-1 sudo[80324]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:01 compute-1 podman[80399]: 2025-10-10 09:46:01.718667772 +0000 UTC m=+0.041957020 container create 2fd169cbe8acdaf965af1c3b70ef9f4821297cfffa0962ce1b8ee41a33209af9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 10 09:46:01 compute-1 systemd[1]: Started libpod-conmon-2fd169cbe8acdaf965af1c3b70ef9f4821297cfffa0962ce1b8ee41a33209af9.scope.
Oct 10 09:46:01 compute-1 systemd[1]: Started libcrun container.
Oct 10 09:46:01 compute-1 podman[80399]: 2025-10-10 09:46:01.700850449 +0000 UTC m=+0.024139717 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:46:01 compute-1 podman[80399]: 2025-10-10 09:46:01.819761675 +0000 UTC m=+0.143050993 container init 2fd169cbe8acdaf965af1c3b70ef9f4821297cfffa0962ce1b8ee41a33209af9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_turing, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325)
Oct 10 09:46:01 compute-1 podman[80399]: 2025-10-10 09:46:01.828889623 +0000 UTC m=+0.152178891 container start 2fd169cbe8acdaf965af1c3b70ef9f4821297cfffa0962ce1b8ee41a33209af9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_turing, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 10 09:46:01 compute-1 podman[80399]: 2025-10-10 09:46:01.834993209 +0000 UTC m=+0.158282517 container attach 2fd169cbe8acdaf965af1c3b70ef9f4821297cfffa0962ce1b8ee41a33209af9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_turing, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:46:01 compute-1 zealous_turing[80415]: 167 167
Oct 10 09:46:01 compute-1 systemd[1]: libpod-2fd169cbe8acdaf965af1c3b70ef9f4821297cfffa0962ce1b8ee41a33209af9.scope: Deactivated successfully.
Oct 10 09:46:01 compute-1 podman[80399]: 2025-10-10 09:46:01.837345313 +0000 UTC m=+0.160634541 container died 2fd169cbe8acdaf965af1c3b70ef9f4821297cfffa0962ce1b8ee41a33209af9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_turing, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:46:01 compute-1 systemd[1]: var-lib-containers-storage-overlay-8e2853bb9aa7abe75bd64a412927eac2cb74ab8cda85c7648de019d843c890e1-merged.mount: Deactivated successfully.
Oct 10 09:46:01 compute-1 podman[80399]: 2025-10-10 09:46:01.880601786 +0000 UTC m=+0.203891024 container remove 2fd169cbe8acdaf965af1c3b70ef9f4821297cfffa0962ce1b8ee41a33209af9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_turing, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:46:01 compute-1 systemd[1]: libpod-conmon-2fd169cbe8acdaf965af1c3b70ef9f4821297cfffa0962ce1b8ee41a33209af9.scope: Deactivated successfully.
Oct 10 09:46:01 compute-1 sudo[80346]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:01 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Oct 10 09:46:01 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Oct 10 09:46:02 compute-1 sudo[80432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:46:02 compute-1 sudo[80432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:02 compute-1 sudo[80432]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:46:02 compute-1 sudo[80457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:46:02 compute-1 sudo[80457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:02 compute-1 ceph-mon[79167]: 3.4 scrub starts
Oct 10 09:46:02 compute-1 ceph-mon[79167]: 3.4 scrub ok
Oct 10 09:46:02 compute-1 ceph-mon[79167]: 3.11 scrub starts
Oct 10 09:46:02 compute-1 ceph-mon[79167]: 3.11 scrub ok
Oct 10 09:46:02 compute-1 ceph-mon[79167]: Reconfiguring crash.compute-1 (monmap changed)...
Oct 10 09:46:02 compute-1 ceph-mon[79167]: Reconfiguring daemon crash.compute-1 on compute-1
Oct 10 09:46:02 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:02 compute-1 ceph-mon[79167]: 5.15 scrub starts
Oct 10 09:46:02 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:02 compute-1 ceph-mon[79167]: Reconfiguring osd.1 (monmap changed)...
Oct 10 09:46:02 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct 10 09:46:02 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:46:02 compute-1 ceph-mon[79167]: Reconfiguring daemon osd.1 on compute-1
Oct 10 09:46:02 compute-1 ceph-mon[79167]: 5.15 scrub ok
Oct 10 09:46:02 compute-1 podman[80498]: 2025-10-10 09:46:02.562953482 +0000 UTC m=+0.072703784 container create 1b29046c0f466a5dfc0784d3cd6b9c10772663205a3fc9d991dc856bfcf2a7a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_borg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid)
Oct 10 09:46:02 compute-1 systemd[1]: Started libpod-conmon-1b29046c0f466a5dfc0784d3cd6b9c10772663205a3fc9d991dc856bfcf2a7a9.scope.
Oct 10 09:46:02 compute-1 podman[80498]: 2025-10-10 09:46:02.532623969 +0000 UTC m=+0.042374311 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:46:02 compute-1 systemd[1]: Started libcrun container.
Oct 10 09:46:02 compute-1 podman[80498]: 2025-10-10 09:46:02.667792236 +0000 UTC m=+0.177542538 container init 1b29046c0f466a5dfc0784d3cd6b9c10772663205a3fc9d991dc856bfcf2a7a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_borg, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 10 09:46:02 compute-1 podman[80498]: 2025-10-10 09:46:02.678220219 +0000 UTC m=+0.187970521 container start 1b29046c0f466a5dfc0784d3cd6b9c10772663205a3fc9d991dc856bfcf2a7a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_borg, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 10 09:46:02 compute-1 podman[80498]: 2025-10-10 09:46:02.682407432 +0000 UTC m=+0.192157744 container attach 1b29046c0f466a5dfc0784d3cd6b9c10772663205a3fc9d991dc856bfcf2a7a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_borg, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct 10 09:46:02 compute-1 ecstatic_borg[80514]: 167 167
Oct 10 09:46:02 compute-1 systemd[1]: libpod-1b29046c0f466a5dfc0784d3cd6b9c10772663205a3fc9d991dc856bfcf2a7a9.scope: Deactivated successfully.
Oct 10 09:46:02 compute-1 podman[80498]: 2025-10-10 09:46:02.686439992 +0000 UTC m=+0.196190254 container died 1b29046c0f466a5dfc0784d3cd6b9c10772663205a3fc9d991dc856bfcf2a7a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_borg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 10 09:46:02 compute-1 systemd[1]: var-lib-containers-storage-overlay-98989988f73f190002547e2c425aac850cb1e9b76b25cadb003e493c30f82224-merged.mount: Deactivated successfully.
Oct 10 09:46:02 compute-1 podman[80498]: 2025-10-10 09:46:02.729565282 +0000 UTC m=+0.239315554 container remove 1b29046c0f466a5dfc0784d3cd6b9c10772663205a3fc9d991dc856bfcf2a7a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_borg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct 10 09:46:02 compute-1 systemd[1]: libpod-conmon-1b29046c0f466a5dfc0784d3cd6b9c10772663205a3fc9d991dc856bfcf2a7a9.scope: Deactivated successfully.
Oct 10 09:46:02 compute-1 sudo[80457]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:02 compute-1 sudo[80539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:46:02 compute-1 sudo[80539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:02 compute-1 sudo[80539]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:03 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Oct 10 09:46:03 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Oct 10 09:46:03 compute-1 sudo[80564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:46:03 compute-1 sudo[80564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:03 compute-1 ceph-mon[79167]: 4.4 scrub starts
Oct 10 09:46:03 compute-1 ceph-mon[79167]: 4.4 scrub ok
Oct 10 09:46:03 compute-1 ceph-mon[79167]: 5.b scrub starts
Oct 10 09:46:03 compute-1 ceph-mon[79167]: 5.b scrub ok
Oct 10 09:46:03 compute-1 ceph-mon[79167]: from='client.? ' entity='client.admin' 
Oct 10 09:46:03 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:03 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:03 compute-1 ceph-mon[79167]: Reconfiguring mon.compute-1 (monmap changed)...
Oct 10 09:46:03 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 10 09:46:03 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 10 09:46:03 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:46:03 compute-1 ceph-mon[79167]: Reconfiguring daemon mon.compute-1 on compute-1
Oct 10 09:46:03 compute-1 ceph-mon[79167]: pgmap v100: 162 pgs: 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:46:03 compute-1 ceph-mon[79167]: 3.14 scrub starts
Oct 10 09:46:03 compute-1 ceph-mon[79167]: 3.14 scrub ok
Oct 10 09:46:03 compute-1 podman[80607]: 2025-10-10 09:46:03.472317026 +0000 UTC m=+0.067934804 container create b162c740d9c8b9e0041b895368c69e4e9899360ea1805688cb060d2dcd88b28d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_borg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2)
Oct 10 09:46:03 compute-1 systemd[1]: Started libpod-conmon-b162c740d9c8b9e0041b895368c69e4e9899360ea1805688cb060d2dcd88b28d.scope.
Oct 10 09:46:03 compute-1 podman[80607]: 2025-10-10 09:46:03.443137025 +0000 UTC m=+0.038754853 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:46:03 compute-1 systemd[1]: Started libcrun container.
Oct 10 09:46:03 compute-1 podman[80607]: 2025-10-10 09:46:03.560747066 +0000 UTC m=+0.156364884 container init b162c740d9c8b9e0041b895368c69e4e9899360ea1805688cb060d2dcd88b28d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:46:03 compute-1 podman[80607]: 2025-10-10 09:46:03.57451582 +0000 UTC m=+0.170133608 container start b162c740d9c8b9e0041b895368c69e4e9899360ea1805688cb060d2dcd88b28d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_borg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Oct 10 09:46:03 compute-1 podman[80607]: 2025-10-10 09:46:03.578419275 +0000 UTC m=+0.174037063 container attach b162c740d9c8b9e0041b895368c69e4e9899360ea1805688cb060d2dcd88b28d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 10 09:46:03 compute-1 zen_borg[80623]: 167 167
Oct 10 09:46:03 compute-1 systemd[1]: libpod-b162c740d9c8b9e0041b895368c69e4e9899360ea1805688cb060d2dcd88b28d.scope: Deactivated successfully.
Oct 10 09:46:03 compute-1 podman[80607]: 2025-10-10 09:46:03.584234933 +0000 UTC m=+0.179852721 container died b162c740d9c8b9e0041b895368c69e4e9899360ea1805688cb060d2dcd88b28d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:46:03 compute-1 systemd[1]: var-lib-containers-storage-overlay-6ef8610fdf4cb2c21bba4dddaa82f2acff09b3f99f4b1107782956b12e15881e-merged.mount: Deactivated successfully.
Oct 10 09:46:03 compute-1 podman[80607]: 2025-10-10 09:46:03.639147873 +0000 UTC m=+0.234765651 container remove b162c740d9c8b9e0041b895368c69e4e9899360ea1805688cb060d2dcd88b28d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_borg, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:46:03 compute-1 systemd[1]: libpod-conmon-b162c740d9c8b9e0041b895368c69e4e9899360ea1805688cb060d2dcd88b28d.scope: Deactivated successfully.
Oct 10 09:46:03 compute-1 sudo[80564]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:03 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Oct 10 09:46:03 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Oct 10 09:46:04 compute-1 ceph-mon[79167]: 5.5 scrub starts
Oct 10 09:46:04 compute-1 ceph-mon[79167]: 5.5 scrub ok
Oct 10 09:46:04 compute-1 ceph-mon[79167]: 5.d scrub starts
Oct 10 09:46:04 compute-1 ceph-mon[79167]: 5.d scrub ok
Oct 10 09:46:04 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:04 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:04 compute-1 ceph-mon[79167]: Reconfiguring mon.compute-2 (monmap changed)...
Oct 10 09:46:04 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 10 09:46:04 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 10 09:46:04 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:46:04 compute-1 ceph-mon[79167]: Reconfiguring daemon mon.compute-2 on compute-2
Oct 10 09:46:04 compute-1 ceph-mon[79167]: 5.16 scrub starts
Oct 10 09:46:04 compute-1 ceph-mon[79167]: 5.16 scrub ok
Oct 10 09:46:04 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/699590867' entity='client.admin' 
Oct 10 09:46:04 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:04 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:04 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.gkrssp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 10 09:46:04 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 09:46:04 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:46:04 compute-1 sudo[80640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:46:04 compute-1 sudo[80640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:04 compute-1 sudo[80640]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:04 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Oct 10 09:46:05 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Oct 10 09:46:05 compute-1 sudo[80665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 10 09:46:05 compute-1 sudo[80665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:05 compute-1 ceph-mon[79167]: 6.6 scrub starts
Oct 10 09:46:05 compute-1 ceph-mon[79167]: 6.6 scrub ok
Oct 10 09:46:05 compute-1 ceph-mon[79167]: 4.8 scrub starts
Oct 10 09:46:05 compute-1 ceph-mon[79167]: 4.8 scrub ok
Oct 10 09:46:05 compute-1 ceph-mon[79167]: Reconfiguring mgr.compute-2.gkrssp (monmap changed)...
Oct 10 09:46:05 compute-1 ceph-mon[79167]: Reconfiguring daemon mgr.compute-2.gkrssp on compute-2
Oct 10 09:46:05 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:05 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:05 compute-1 ceph-mon[79167]: pgmap v101: 162 pgs: 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:46:05 compute-1 ceph-mon[79167]: 4.13 scrub starts
Oct 10 09:46:05 compute-1 ceph-mon[79167]: 4.13 scrub ok
Oct 10 09:46:05 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1171706134' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Oct 10 09:46:05 compute-1 podman[80762]: 2025-10-10 09:46:05.767986418 +0000 UTC m=+0.081315208 container exec 8a2c16c69263d410aefee28722d7403147210c67386f896d09115878d61e6ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct 10 09:46:05 compute-1 podman[80762]: 2025-10-10 09:46:05.900804071 +0000 UTC m=+0.214132811 container exec_died 8a2c16c69263d410aefee28722d7403147210c67386f896d09115878d61e6ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Oct 10 09:46:05 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 3.c scrub starts
Oct 10 09:46:05 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 3.c scrub ok
Oct 10 09:46:06 compute-1 sudo[80665]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:06 compute-1 ceph-mon[79167]: 3.2 scrub starts
Oct 10 09:46:06 compute-1 ceph-mon[79167]: 3.2 scrub ok
Oct 10 09:46:06 compute-1 ceph-mon[79167]: 4.9 scrub starts
Oct 10 09:46:06 compute-1 ceph-mon[79167]: 4.9 scrub ok
Oct 10 09:46:06 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:06 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:06 compute-1 ceph-mon[79167]: 3.c scrub starts
Oct 10 09:46:06 compute-1 ceph-mon[79167]: 3.c scrub ok
Oct 10 09:46:06 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1171706134' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Oct 10 09:46:06 compute-1 ceph-mon[79167]: mgrmap e11: compute-0.xkdepb(active, since 2m), standbys: compute-2.gkrssp, compute-1.rfugxc
Oct 10 09:46:06 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:06 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:06 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 3.f deep-scrub starts
Oct 10 09:46:06 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 3.f deep-scrub ok
Oct 10 09:46:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:46:07 compute-1 ceph-mon[79167]: 3.1 deep-scrub starts
Oct 10 09:46:07 compute-1 ceph-mon[79167]: 3.1 deep-scrub ok
Oct 10 09:46:07 compute-1 ceph-mon[79167]: 3.0 scrub starts
Oct 10 09:46:07 compute-1 ceph-mon[79167]: 3.0 scrub ok
Oct 10 09:46:07 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:46:07 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:46:07 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:07 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 09:46:07 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:46:07 compute-1 ceph-mon[79167]: from='mgr.14122 192.168.122.100:0/2212424954' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:46:07 compute-1 ceph-mon[79167]: pgmap v102: 162 pgs: 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:46:07 compute-1 ceph-mon[79167]: 3.f deep-scrub starts
Oct 10 09:46:07 compute-1 ceph-mon[79167]: 3.f deep-scrub ok
Oct 10 09:46:07 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/520827948' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Oct 10 09:46:07 compute-1 ceph-mgr[79476]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct 10 09:46:07 compute-1 ceph-mgr[79476]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct 10 09:46:07 compute-1 sshd-session[72093]: Connection closed by 192.168.122.100 port 53060
Oct 10 09:46:07 compute-1 sshd-session[72120]: Connection closed by 192.168.122.100 port 56860
Oct 10 09:46:07 compute-1 sshd-session[72149]: Connection closed by 192.168.122.100 port 56870
Oct 10 09:46:07 compute-1 sshd-session[71977]: Connection closed by 192.168.122.100 port 53046
Oct 10 09:46:07 compute-1 sshd-session[71919]: Connection closed by 192.168.122.100 port 53018
Oct 10 09:46:07 compute-1 sshd-session[71860]: Connection closed by 192.168.122.100 port 52998
Oct 10 09:46:07 compute-1 sshd-session[71948]: Connection closed by 192.168.122.100 port 53030
Oct 10 09:46:07 compute-1 sshd-session[71861]: Connection closed by 192.168.122.100 port 53006
Oct 10 09:46:07 compute-1 sshd-session[71890]: Connection closed by 192.168.122.100 port 53012
Oct 10 09:46:07 compute-1 sshd-session[72035]: Connection closed by 192.168.122.100 port 53056
Oct 10 09:46:07 compute-1 sshd-session[72064]: Connection closed by 192.168.122.100 port 53058
Oct 10 09:46:07 compute-1 sshd-session[72006]: Connection closed by 192.168.122.100 port 53050
Oct 10 09:46:07 compute-1 sshd-session[71837]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 10 09:46:07 compute-1 sshd-session[72146]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 10 09:46:07 compute-1 sshd-session[71845]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 10 09:46:07 compute-1 sshd-session[71974]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 10 09:46:07 compute-1 sshd-session[72117]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 10 09:46:07 compute-1 sshd-session[71887]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 10 09:46:07 compute-1 sshd-session[71945]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 10 09:46:07 compute-1 systemd[1]: session-33.scope: Deactivated successfully.
Oct 10 09:46:07 compute-1 systemd[1]: session-33.scope: Consumed 1min 12.729s CPU time.
Oct 10 09:46:07 compute-1 sshd-session[72032]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 10 09:46:07 compute-1 sshd-session[72061]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 10 09:46:07 compute-1 sshd-session[71916]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 10 09:46:07 compute-1 systemd-logind[789]: Session 33 logged out. Waiting for processes to exit.
Oct 10 09:46:07 compute-1 sshd-session[72090]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 10 09:46:07 compute-1 systemd[1]: session-29.scope: Deactivated successfully.
Oct 10 09:46:07 compute-1 sshd-session[72003]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 10 09:46:07 compute-1 systemd[1]: session-23.scope: Deactivated successfully.
Oct 10 09:46:07 compute-1 systemd[1]: session-24.scope: Deactivated successfully.
Oct 10 09:46:07 compute-1 systemd[1]: session-27.scope: Deactivated successfully.
Oct 10 09:46:07 compute-1 systemd[1]: session-26.scope: Deactivated successfully.
Oct 10 09:46:07 compute-1 systemd-logind[789]: Session 29 logged out. Waiting for processes to exit.
Oct 10 09:46:07 compute-1 systemd[1]: session-28.scope: Deactivated successfully.
Oct 10 09:46:07 compute-1 systemd[1]: session-32.scope: Deactivated successfully.
Oct 10 09:46:07 compute-1 systemd[1]: session-31.scope: Deactivated successfully.
Oct 10 09:46:07 compute-1 systemd[1]: session-21.scope: Deactivated successfully.
Oct 10 09:46:07 compute-1 systemd[1]: session-25.scope: Deactivated successfully.
Oct 10 09:46:07 compute-1 systemd-logind[789]: Session 23 logged out. Waiting for processes to exit.
Oct 10 09:46:07 compute-1 systemd[1]: session-30.scope: Deactivated successfully.
Oct 10 09:46:07 compute-1 systemd-logind[789]: Session 26 logged out. Waiting for processes to exit.
Oct 10 09:46:07 compute-1 systemd-logind[789]: Session 27 logged out. Waiting for processes to exit.
Oct 10 09:46:07 compute-1 systemd-logind[789]: Session 24 logged out. Waiting for processes to exit.
Oct 10 09:46:07 compute-1 systemd-logind[789]: Session 31 logged out. Waiting for processes to exit.
Oct 10 09:46:07 compute-1 systemd-logind[789]: Session 32 logged out. Waiting for processes to exit.
Oct 10 09:46:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: ignoring --setuser ceph since I am not root
Oct 10 09:46:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: ignoring --setgroup ceph since I am not root
Oct 10 09:46:07 compute-1 systemd-logind[789]: Session 28 logged out. Waiting for processes to exit.
Oct 10 09:46:07 compute-1 systemd-logind[789]: Session 25 logged out. Waiting for processes to exit.
Oct 10 09:46:07 compute-1 ceph-mgr[79476]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct 10 09:46:07 compute-1 ceph-mgr[79476]: pidfile_write: ignore empty --pid-file
Oct 10 09:46:07 compute-1 systemd-logind[789]: Session 21 logged out. Waiting for processes to exit.
Oct 10 09:46:07 compute-1 systemd-logind[789]: Session 30 logged out. Waiting for processes to exit.
Oct 10 09:46:07 compute-1 systemd-logind[789]: Removed session 33.
Oct 10 09:46:07 compute-1 systemd-logind[789]: Removed session 29.
Oct 10 09:46:07 compute-1 systemd-logind[789]: Removed session 23.
Oct 10 09:46:07 compute-1 systemd-logind[789]: Removed session 24.
Oct 10 09:46:07 compute-1 systemd-logind[789]: Removed session 27.
Oct 10 09:46:07 compute-1 systemd-logind[789]: Removed session 26.
Oct 10 09:46:07 compute-1 systemd-logind[789]: Removed session 28.
Oct 10 09:46:07 compute-1 systemd-logind[789]: Removed session 32.
Oct 10 09:46:07 compute-1 systemd-logind[789]: Removed session 31.
Oct 10 09:46:07 compute-1 systemd-logind[789]: Removed session 21.
Oct 10 09:46:07 compute-1 systemd-logind[789]: Removed session 25.
Oct 10 09:46:07 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'alerts'
Oct 10 09:46:07 compute-1 systemd-logind[789]: Removed session 30.
Oct 10 09:46:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:07.776+0000 7f5eeb717140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 10 09:46:07 compute-1 ceph-mgr[79476]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 10 09:46:07 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'balancer'
Oct 10 09:46:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:07.854+0000 7f5eeb717140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 10 09:46:07 compute-1 ceph-mgr[79476]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 10 09:46:07 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'cephadm'
Oct 10 09:46:07 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 3.d scrub starts
Oct 10 09:46:07 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 3.d scrub ok
Oct 10 09:46:08 compute-1 ceph-mon[79167]: 6.4 scrub starts
Oct 10 09:46:08 compute-1 ceph-mon[79167]: 6.4 scrub ok
Oct 10 09:46:08 compute-1 ceph-mon[79167]: 5.0 scrub starts
Oct 10 09:46:08 compute-1 ceph-mon[79167]: 5.0 scrub ok
Oct 10 09:46:08 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/520827948' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Oct 10 09:46:08 compute-1 ceph-mon[79167]: mgrmap e12: compute-0.xkdepb(active, since 2m), standbys: compute-2.gkrssp, compute-1.rfugxc
Oct 10 09:46:08 compute-1 ceph-mon[79167]: 3.d scrub starts
Oct 10 09:46:08 compute-1 ceph-mon[79167]: 3.d scrub ok
Oct 10 09:46:08 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'crash'
Oct 10 09:46:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:08.636+0000 7f5eeb717140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 10 09:46:08 compute-1 ceph-mgr[79476]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 10 09:46:08 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'dashboard'
Oct 10 09:46:08 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Oct 10 09:46:08 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Oct 10 09:46:09 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'devicehealth'
Oct 10 09:46:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:09.233+0000 7f5eeb717140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 10 09:46:09 compute-1 ceph-mgr[79476]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 10 09:46:09 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'diskprediction_local'
Oct 10 09:46:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 10 09:46:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 10 09:46:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]:   from numpy import show_config as show_numpy_config
Oct 10 09:46:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:09.394+0000 7f5eeb717140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 10 09:46:09 compute-1 ceph-mgr[79476]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 10 09:46:09 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'influx'
Oct 10 09:46:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:09.461+0000 7f5eeb717140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 10 09:46:09 compute-1 ceph-mgr[79476]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 10 09:46:09 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'insights'
Oct 10 09:46:09 compute-1 ceph-mon[79167]: 6.0 scrub starts
Oct 10 09:46:09 compute-1 ceph-mon[79167]: 6.0 scrub ok
Oct 10 09:46:09 compute-1 ceph-mon[79167]: 4.2 scrub starts
Oct 10 09:46:09 compute-1 ceph-mon[79167]: 4.2 scrub ok
Oct 10 09:46:09 compute-1 ceph-mon[79167]: 5.3 scrub starts
Oct 10 09:46:09 compute-1 ceph-mon[79167]: 5.3 scrub ok
Oct 10 09:46:09 compute-1 ceph-mon[79167]: 5.9 scrub starts
Oct 10 09:46:09 compute-1 ceph-mon[79167]: 5.9 scrub ok
Oct 10 09:46:09 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'iostat'
Oct 10 09:46:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:09.597+0000 7f5eeb717140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 10 09:46:09 compute-1 ceph-mgr[79476]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 10 09:46:09 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'k8sevents'
Oct 10 09:46:09 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'localpool'
Oct 10 09:46:09 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Oct 10 09:46:09 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Oct 10 09:46:10 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'mds_autoscaler'
Oct 10 09:46:10 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'mirroring'
Oct 10 09:46:10 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'nfs'
Oct 10 09:46:10 compute-1 ceph-mon[79167]: 4.19 scrub starts
Oct 10 09:46:10 compute-1 ceph-mon[79167]: 4.19 scrub ok
Oct 10 09:46:10 compute-1 ceph-mon[79167]: 3.6 scrub starts
Oct 10 09:46:10 compute-1 ceph-mon[79167]: 3.6 scrub ok
Oct 10 09:46:10 compute-1 ceph-mon[79167]: 3.10 scrub starts
Oct 10 09:46:10 compute-1 ceph-mon[79167]: 3.10 scrub ok
Oct 10 09:46:10 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:10.565+0000 7f5eeb717140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 10 09:46:10 compute-1 ceph-mgr[79476]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 10 09:46:10 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'orchestrator'
Oct 10 09:46:10 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:10.771+0000 7f5eeb717140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 10 09:46:10 compute-1 ceph-mgr[79476]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 10 09:46:10 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'osd_perf_query'
Oct 10 09:46:10 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:10.848+0000 7f5eeb717140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 10 09:46:10 compute-1 ceph-mgr[79476]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 10 09:46:10 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'osd_support'
Oct 10 09:46:10 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:10.913+0000 7f5eeb717140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 10 09:46:10 compute-1 ceph-mgr[79476]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 10 09:46:10 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'pg_autoscaler'
Oct 10 09:46:10 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Oct 10 09:46:10 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Oct 10 09:46:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:11.001+0000 7f5eeb717140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 10 09:46:11 compute-1 ceph-mgr[79476]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 10 09:46:11 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'progress'
Oct 10 09:46:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:11.072+0000 7f5eeb717140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 10 09:46:11 compute-1 ceph-mgr[79476]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 10 09:46:11 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'prometheus'
Oct 10 09:46:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:11.388+0000 7f5eeb717140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 10 09:46:11 compute-1 ceph-mgr[79476]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 10 09:46:11 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'rbd_support'
Oct 10 09:46:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:11.479+0000 7f5eeb717140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 10 09:46:11 compute-1 ceph-mgr[79476]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 10 09:46:11 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'restful'
Oct 10 09:46:11 compute-1 ceph-mon[79167]: 5.1a scrub starts
Oct 10 09:46:11 compute-1 ceph-mon[79167]: 5.1a scrub ok
Oct 10 09:46:11 compute-1 ceph-mon[79167]: 3.7 scrub starts
Oct 10 09:46:11 compute-1 ceph-mon[79167]: 3.7 scrub ok
Oct 10 09:46:11 compute-1 ceph-mon[79167]: 3.13 scrub starts
Oct 10 09:46:11 compute-1 ceph-mon[79167]: 3.13 scrub ok
Oct 10 09:46:11 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'rgw'
Oct 10 09:46:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:11.924+0000 7f5eeb717140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 10 09:46:11 compute-1 ceph-mgr[79476]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 10 09:46:11 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'rook'
Oct 10 09:46:12 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 6.a scrub starts
Oct 10 09:46:12 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 6.a scrub ok
Oct 10 09:46:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:46:12 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:12.458+0000 7f5eeb717140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 10 09:46:12 compute-1 ceph-mgr[79476]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 10 09:46:12 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'selftest'
Oct 10 09:46:12 compute-1 ceph-mon[79167]: 4.1 scrub starts
Oct 10 09:46:12 compute-1 ceph-mon[79167]: 4.1 scrub ok
Oct 10 09:46:12 compute-1 ceph-mon[79167]: 4.0 scrub starts
Oct 10 09:46:12 compute-1 ceph-mon[79167]: 4.0 scrub ok
Oct 10 09:46:12 compute-1 ceph-mon[79167]: 6.a scrub starts
Oct 10 09:46:12 compute-1 ceph-mon[79167]: 6.a scrub ok
Oct 10 09:46:12 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:12.525+0000 7f5eeb717140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 10 09:46:12 compute-1 ceph-mgr[79476]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 10 09:46:12 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'snap_schedule'
Oct 10 09:46:12 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:12.601+0000 7f5eeb717140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 10 09:46:12 compute-1 ceph-mgr[79476]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 10 09:46:12 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'stats'
Oct 10 09:46:12 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'status'
Oct 10 09:46:12 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:12.745+0000 7f5eeb717140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 10 09:46:12 compute-1 ceph-mgr[79476]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 10 09:46:12 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'telegraf'
Oct 10 09:46:12 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:12.821+0000 7f5eeb717140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 10 09:46:12 compute-1 ceph-mgr[79476]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 10 09:46:12 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'telemetry'
Oct 10 09:46:12 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:12.978+0000 7f5eeb717140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 10 09:46:12 compute-1 ceph-mgr[79476]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 10 09:46:12 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'test_orchestrator'
Oct 10 09:46:13 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Oct 10 09:46:13 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Oct 10 09:46:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:13.189+0000 7f5eeb717140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 10 09:46:13 compute-1 ceph-mgr[79476]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 10 09:46:13 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'volumes'
Oct 10 09:46:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:13.455+0000 7f5eeb717140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 10 09:46:13 compute-1 ceph-mgr[79476]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 10 09:46:13 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'zabbix'
Oct 10 09:46:13 compute-1 ceph-mon[79167]: 6.1b scrub starts
Oct 10 09:46:13 compute-1 ceph-mon[79167]: 6.1b scrub ok
Oct 10 09:46:13 compute-1 ceph-mon[79167]: 4.7 scrub starts
Oct 10 09:46:13 compute-1 ceph-mon[79167]: 4.7 scrub ok
Oct 10 09:46:13 compute-1 ceph-mon[79167]: 6.8 scrub starts
Oct 10 09:46:13 compute-1 ceph-mon[79167]: 6.8 scrub ok
Oct 10 09:46:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:13.521+0000 7f5eeb717140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 10 09:46:13 compute-1 ceph-mgr[79476]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 10 09:46:13 compute-1 ceph-mgr[79476]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:46:13 compute-1 ceph-mgr[79476]: mgr load Constructed class from module: dashboard
Oct 10 09:46:13 compute-1 ceph-mgr[79476]: [dashboard INFO root] server: ssl=no host=192.168.122.101 port=8443
Oct 10 09:46:13 compute-1 ceph-mgr[79476]: [dashboard INFO root] Configured CherryPy, starting engine...
Oct 10 09:46:13 compute-1 ceph-mgr[79476]: [dashboard INFO root] Starting engine...
Oct 10 09:46:13 compute-1 ceph-mgr[79476]: ms_deliver_dispatch: unhandled message 0x55dd163c3860 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Oct 10 09:46:13 compute-1 ceph-mgr[79476]: [dashboard INFO root] Engine started...
Oct 10 09:46:13 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e36 e36: 3 total, 3 up, 3 in
Oct 10 09:46:14 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 4.a scrub starts
Oct 10 09:46:14 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 4.a scrub ok
Oct 10 09:46:14 compute-1 sshd-session[80894]: Accepted publickey for ceph-admin from 192.168.122.100 port 43040 ssh2: RSA SHA256:iFwOnwcB2x2Q1gpAWZobZa2jCZZy75CuUHv4ViVnHA0
Oct 10 09:46:14 compute-1 systemd-logind[789]: New session 34 of user ceph-admin.
Oct 10 09:46:14 compute-1 systemd[1]: Started Session 34 of User ceph-admin.
Oct 10 09:46:14 compute-1 sshd-session[80894]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 10 09:46:14 compute-1 ceph-mon[79167]: 4.6 scrub starts
Oct 10 09:46:14 compute-1 ceph-mon[79167]: 4.6 scrub ok
Oct 10 09:46:14 compute-1 ceph-mon[79167]: Standby manager daemon compute-1.rfugxc restarted
Oct 10 09:46:14 compute-1 ceph-mon[79167]: Standby manager daemon compute-1.rfugxc started
Oct 10 09:46:14 compute-1 ceph-mon[79167]: 5.6 scrub starts
Oct 10 09:46:14 compute-1 ceph-mon[79167]: 5.6 scrub ok
Oct 10 09:46:14 compute-1 ceph-mon[79167]: Standby manager daemon compute-2.gkrssp restarted
Oct 10 09:46:14 compute-1 ceph-mon[79167]: Standby manager daemon compute-2.gkrssp started
Oct 10 09:46:14 compute-1 ceph-mon[79167]: Active manager daemon compute-0.xkdepb restarted
Oct 10 09:46:14 compute-1 ceph-mon[79167]: Activating manager daemon compute-0.xkdepb
Oct 10 09:46:14 compute-1 ceph-mon[79167]: osdmap e36: 3 total, 3 up, 3 in
Oct 10 09:46:14 compute-1 ceph-mon[79167]: mgrmap e13: compute-0.xkdepb(active, starting, since 0.0350206s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:46:14 compute-1 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 10 09:46:14 compute-1 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 10 09:46:14 compute-1 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 10 09:46:14 compute-1 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr metadata", "who": "compute-0.xkdepb", "id": "compute-0.xkdepb"}]: dispatch
Oct 10 09:46:14 compute-1 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr metadata", "who": "compute-1.rfugxc", "id": "compute-1.rfugxc"}]: dispatch
Oct 10 09:46:14 compute-1 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr metadata", "who": "compute-2.gkrssp", "id": "compute-2.gkrssp"}]: dispatch
Oct 10 09:46:14 compute-1 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 09:46:14 compute-1 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:46:14 compute-1 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:46:14 compute-1 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 10 09:46:14 compute-1 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 10 09:46:14 compute-1 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 10 09:46:14 compute-1 ceph-mon[79167]: Manager daemon compute-0.xkdepb is now available
Oct 10 09:46:14 compute-1 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xkdepb/mirror_snapshot_schedule"}]: dispatch
Oct 10 09:46:14 compute-1 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xkdepb/trash_purge_schedule"}]: dispatch
Oct 10 09:46:14 compute-1 sudo[80898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:46:14 compute-1 sudo[80898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:14 compute-1 sudo[80898]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:14 compute-1 sudo[80923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 10 09:46:14 compute-1 sudo[80923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:15 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 4.d scrub starts
Oct 10 09:46:15 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 4.d scrub ok
Oct 10 09:46:15 compute-1 podman[81020]: 2025-10-10 09:46:15.320303473 +0000 UTC m=+0.074498453 container exec 8a2c16c69263d410aefee28722d7403147210c67386f896d09115878d61e6ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct 10 09:46:15 compute-1 podman[81020]: 2025-10-10 09:46:15.442729344 +0000 UTC m=+0.196924284 container exec_died 8a2c16c69263d410aefee28722d7403147210c67386f896d09115878d61e6ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct 10 09:46:15 compute-1 ceph-mon[79167]: 3.1b scrub starts
Oct 10 09:46:15 compute-1 ceph-mon[79167]: 3.1b scrub ok
Oct 10 09:46:15 compute-1 ceph-mon[79167]: 4.a scrub starts
Oct 10 09:46:15 compute-1 ceph-mon[79167]: 4.a scrub ok
Oct 10 09:46:15 compute-1 ceph-mon[79167]: 5.c scrub starts
Oct 10 09:46:15 compute-1 ceph-mon[79167]: 5.c scrub ok
Oct 10 09:46:15 compute-1 ceph-mon[79167]: mgrmap e14: compute-0.xkdepb(active, since 1.07712s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:46:15 compute-1 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:15 compute-1 sudo[80923]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:15 compute-1 sudo[81106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:46:15 compute-1 sudo[81106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:15 compute-1 sudo[81106]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:16 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 3.a scrub starts
Oct 10 09:46:16 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 3.a scrub ok
Oct 10 09:46:16 compute-1 sudo[81131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 09:46:16 compute-1 sudo[81131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:16 compute-1 ceph-mon[79167]: 5.e deep-scrub starts
Oct 10 09:46:16 compute-1 ceph-mon[79167]: 5.e deep-scrub ok
Oct 10 09:46:16 compute-1 ceph-mon[79167]: 4.d scrub starts
Oct 10 09:46:16 compute-1 ceph-mon[79167]: 4.d scrub ok
Oct 10 09:46:16 compute-1 ceph-mon[79167]: [10/Oct/2025:09:46:15] ENGINE Bus STARTING
Oct 10 09:46:16 compute-1 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:16 compute-1 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:16 compute-1 ceph-mon[79167]: [10/Oct/2025:09:46:15] ENGINE Serving on https://192.168.122.100:7150
Oct 10 09:46:16 compute-1 ceph-mon[79167]: [10/Oct/2025:09:46:15] ENGINE Client ('192.168.122.100', 44336) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 10 09:46:16 compute-1 ceph-mon[79167]: 6.f scrub starts
Oct 10 09:46:16 compute-1 ceph-mon[79167]: 6.f scrub ok
Oct 10 09:46:16 compute-1 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:16 compute-1 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:16 compute-1 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:16 compute-1 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:16 compute-1 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:16 compute-1 ceph-mon[79167]: 3.a scrub starts
Oct 10 09:46:16 compute-1 sudo[81131]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:16 compute-1 sudo[81187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:46:16 compute-1 sudo[81187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:16 compute-1 sudo[81187]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:16 compute-1 sudo[81212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Oct 10 09:46:16 compute-1 sudo[81212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:17 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 6.15 scrub starts
Oct 10 09:46:17 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 6.15 scrub ok
Oct 10 09:46:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:46:17 compute-1 sudo[81212]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:17 compute-1 sudo[81255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 10 09:46:17 compute-1 sudo[81255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:17 compute-1 sudo[81255]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:17 compute-1 sudo[81280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph
Oct 10 09:46:17 compute-1 sudo[81280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:17 compute-1 sudo[81280]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:17 compute-1 sudo[81305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new
Oct 10 09:46:17 compute-1 sudo[81305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:17 compute-1 sudo[81305]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:17 compute-1 ceph-mon[79167]: [10/Oct/2025:09:46:15] ENGINE Serving on http://192.168.122.100:8765
Oct 10 09:46:17 compute-1 ceph-mon[79167]: [10/Oct/2025:09:46:15] ENGINE Bus STARTED
Oct 10 09:46:17 compute-1 ceph-mon[79167]: 4.1c scrub starts
Oct 10 09:46:17 compute-1 ceph-mon[79167]: 4.1c scrub ok
Oct 10 09:46:17 compute-1 ceph-mon[79167]: pgmap v4: 162 pgs: 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:46:17 compute-1 ceph-mon[79167]: from='client.14385 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:46:17 compute-1 ceph-mon[79167]: 3.a scrub ok
Oct 10 09:46:17 compute-1 ceph-mon[79167]: 3.b deep-scrub starts
Oct 10 09:46:17 compute-1 ceph-mon[79167]: 3.b deep-scrub ok
Oct 10 09:46:17 compute-1 ceph-mon[79167]: mgrmap e15: compute-0.xkdepb(active, since 2s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:46:17 compute-1 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:17 compute-1 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:17 compute-1 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct 10 09:46:17 compute-1 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:17 compute-1 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:17 compute-1 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct 10 09:46:17 compute-1 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:17 compute-1 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:17 compute-1 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:17 compute-1 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct 10 09:46:17 compute-1 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:46:17 compute-1 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:46:17 compute-1 sudo[81330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:46:17 compute-1 sudo[81330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:17 compute-1 sudo[81330]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:17 compute-1 sudo[81355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new
Oct 10 09:46:17 compute-1 sudo[81355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:17 compute-1 sudo[81355]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:17 compute-1 sudo[81403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new
Oct 10 09:46:17 compute-1 sudo[81403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:17 compute-1 sudo[81403]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:17 compute-1 sudo[81428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new
Oct 10 09:46:17 compute-1 sudo[81428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:17 compute-1 sudo[81428]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:18 compute-1 sudo[81453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Oct 10 09:46:18 compute-1 sudo[81453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:18 compute-1 sudo[81453]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:18 compute-1 sudo[81478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config
Oct 10 09:46:18 compute-1 sudo[81478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:18 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Oct 10 09:46:18 compute-1 sudo[81478]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:18 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Oct 10 09:46:18 compute-1 sudo[81503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config
Oct 10 09:46:18 compute-1 sudo[81503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:18 compute-1 sudo[81503]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:18 compute-1 sudo[81528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new
Oct 10 09:46:18 compute-1 sudo[81528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:18 compute-1 sudo[81528]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:18 compute-1 sudo[81553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:46:18 compute-1 sudo[81553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:18 compute-1 sudo[81553]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:18 compute-1 sudo[81578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new
Oct 10 09:46:18 compute-1 sudo[81578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:18 compute-1 sudo[81578]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:18 compute-1 ceph-mon[79167]: 6.1e scrub starts
Oct 10 09:46:18 compute-1 ceph-mon[79167]: 6.1e scrub ok
Oct 10 09:46:18 compute-1 ceph-mon[79167]: Adjusting osd_memory_target on compute-2 to 128.0M
Oct 10 09:46:18 compute-1 ceph-mon[79167]: Unable to set osd_memory_target on compute-2 to 134243532: error parsing value: Value '134243532' is below minimum 939524096
Oct 10 09:46:18 compute-1 ceph-mon[79167]: Adjusting osd_memory_target on compute-0 to 128.0M
Oct 10 09:46:18 compute-1 ceph-mon[79167]: Unable to set osd_memory_target on compute-0 to 134240665: error parsing value: Value '134240665' is below minimum 939524096
Oct 10 09:46:18 compute-1 ceph-mon[79167]: from='client.14397 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:46:18 compute-1 ceph-mon[79167]: 6.15 scrub starts
Oct 10 09:46:18 compute-1 ceph-mon[79167]: 6.15 scrub ok
Oct 10 09:46:18 compute-1 ceph-mon[79167]: Adjusting osd_memory_target on compute-1 to 128.0M
Oct 10 09:46:18 compute-1 ceph-mon[79167]: Unable to set osd_memory_target on compute-1 to 134243532: error parsing value: Value '134243532' is below minimum 939524096
Oct 10 09:46:18 compute-1 ceph-mon[79167]: Updating compute-0:/etc/ceph/ceph.conf
Oct 10 09:46:18 compute-1 ceph-mon[79167]: Updating compute-1:/etc/ceph/ceph.conf
Oct 10 09:46:18 compute-1 ceph-mon[79167]: Updating compute-2:/etc/ceph/ceph.conf
Oct 10 09:46:18 compute-1 ceph-mon[79167]: 4.b scrub starts
Oct 10 09:46:18 compute-1 ceph-mon[79167]: 4.b scrub ok
Oct 10 09:46:18 compute-1 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:18 compute-1 sudo[81626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new
Oct 10 09:46:18 compute-1 sudo[81626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:18 compute-1 sudo[81626]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:18 compute-1 sudo[81651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new
Oct 10 09:46:18 compute-1 sudo[81651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:18 compute-1 sudo[81651]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:18 compute-1 sudo[81676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:46:18 compute-1 sudo[81676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:18 compute-1 sudo[81676]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:18 compute-1 sudo[81701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 10 09:46:18 compute-1 sudo[81701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:18 compute-1 sudo[81701]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:18 compute-1 sudo[81726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph
Oct 10 09:46:19 compute-1 sudo[81726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:19 compute-1 sudo[81726]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:19 compute-1 sudo[81751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.client.admin.keyring.new
Oct 10 09:46:19 compute-1 sudo[81751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:19 compute-1 sudo[81751]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:19 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Oct 10 09:46:19 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Oct 10 09:46:19 compute-1 sudo[81776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:46:19 compute-1 sudo[81776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:19 compute-1 sudo[81776]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:19 compute-1 sudo[81801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.client.admin.keyring.new
Oct 10 09:46:19 compute-1 sudo[81801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:19 compute-1 sudo[81801]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:19 compute-1 sudo[81849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.client.admin.keyring.new
Oct 10 09:46:19 compute-1 sudo[81849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:19 compute-1 sudo[81849]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:19 compute-1 sudo[81874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.client.admin.keyring.new
Oct 10 09:46:19 compute-1 sudo[81874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:19 compute-1 sudo[81874]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:19 compute-1 ceph-mon[79167]: 3.1a scrub starts
Oct 10 09:46:19 compute-1 ceph-mon[79167]: 3.1a scrub ok
Oct 10 09:46:19 compute-1 ceph-mon[79167]: Updating compute-0:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:46:19 compute-1 ceph-mon[79167]: pgmap v5: 162 pgs: 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:46:19 compute-1 ceph-mon[79167]: Updating compute-2:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:46:19 compute-1 ceph-mon[79167]: from='client.14403 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:46:19 compute-1 ceph-mon[79167]: Updating compute-1:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:46:19 compute-1 ceph-mon[79167]: 6.7 scrub starts
Oct 10 09:46:19 compute-1 ceph-mon[79167]: 6.7 scrub ok
Oct 10 09:46:19 compute-1 ceph-mon[79167]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 10 09:46:19 compute-1 ceph-mon[79167]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 10 09:46:19 compute-1 ceph-mon[79167]: mgrmap e16: compute-0.xkdepb(active, since 4s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:46:19 compute-1 ceph-mon[79167]: 5.a scrub starts
Oct 10 09:46:19 compute-1 ceph-mon[79167]: 5.a scrub ok
Oct 10 09:46:19 compute-1 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:19 compute-1 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:19 compute-1 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:19 compute-1 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:19 compute-1 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:19 compute-1 sudo[81899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Oct 10 09:46:19 compute-1 sudo[81899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:19 compute-1 sudo[81899]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:19 compute-1 sudo[81924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config
Oct 10 09:46:19 compute-1 sudo[81924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:19 compute-1 sudo[81924]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:19 compute-1 sudo[81949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config
Oct 10 09:46:19 compute-1 sudo[81949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:19 compute-1 sudo[81949]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:19 compute-1 sudo[81974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring.new
Oct 10 09:46:19 compute-1 sudo[81974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:19 compute-1 sudo[81974]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:20 compute-1 sudo[81999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:46:20 compute-1 sudo[81999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:20 compute-1 sudo[81999]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:20 compute-1 sudo[82024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring.new
Oct 10 09:46:20 compute-1 sudo[82024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:20 compute-1 sudo[82024]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:20 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Oct 10 09:46:20 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Oct 10 09:46:20 compute-1 sudo[82072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring.new
Oct 10 09:46:20 compute-1 sudo[82072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:20 compute-1 sudo[82072]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:20 compute-1 sudo[82097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring.new
Oct 10 09:46:20 compute-1 sudo[82097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:20 compute-1 sudo[82097]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:20 compute-1 sudo[82122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring.new /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:46:20 compute-1 sudo[82122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:20 compute-1 sudo[82122]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:20 compute-1 ceph-mgr[79476]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct 10 09:46:20 compute-1 ceph-mgr[79476]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct 10 09:46:20 compute-1 ceph-mgr[79476]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct 10 09:46:20 compute-1 ceph-mgr[79476]: mgr respawn  1: '-n'
Oct 10 09:46:20 compute-1 ceph-mgr[79476]: mgr respawn  2: 'mgr.compute-1.rfugxc'
Oct 10 09:46:20 compute-1 ceph-mgr[79476]: mgr respawn  3: '-f'
Oct 10 09:46:20 compute-1 ceph-mgr[79476]: mgr respawn  4: '--setuser'
Oct 10 09:46:20 compute-1 ceph-mgr[79476]: mgr respawn  5: 'ceph'
Oct 10 09:46:20 compute-1 ceph-mgr[79476]: mgr respawn  6: '--setgroup'
Oct 10 09:46:20 compute-1 ceph-mgr[79476]: mgr respawn  7: 'ceph'
Oct 10 09:46:20 compute-1 ceph-mgr[79476]: mgr respawn  8: '--default-log-to-file=false'
Oct 10 09:46:20 compute-1 ceph-mgr[79476]: mgr respawn  9: '--default-log-to-journald=true'
Oct 10 09:46:20 compute-1 ceph-mgr[79476]: mgr respawn  10: '--default-log-to-stderr=false'
Oct 10 09:46:20 compute-1 ceph-mon[79167]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 10 09:46:20 compute-1 ceph-mon[79167]: from='client.24217 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:46:20 compute-1 ceph-mon[79167]: 4.15 scrub starts
Oct 10 09:46:20 compute-1 ceph-mon[79167]: 4.15 scrub ok
Oct 10 09:46:20 compute-1 ceph-mon[79167]: Updating compute-2:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:46:20 compute-1 ceph-mon[79167]: Updating compute-0:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:46:20 compute-1 ceph-mon[79167]: 5.7 scrub starts
Oct 10 09:46:20 compute-1 ceph-mon[79167]: 5.7 scrub ok
Oct 10 09:46:20 compute-1 ceph-mon[79167]: 6.9 deep-scrub starts
Oct 10 09:46:20 compute-1 ceph-mon[79167]: 6.9 deep-scrub ok
Oct 10 09:46:20 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1314314115' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Oct 10 09:46:20 compute-1 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:20 compute-1 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:20 compute-1 ceph-mon[79167]: from='mgr.14355 192.168.122.100:0/2142097187' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:20 compute-1 sshd-session[80897]: Connection closed by 192.168.122.100 port 43040
Oct 10 09:46:20 compute-1 sshd-session[80894]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 10 09:46:20 compute-1 systemd-logind[789]: Session 34 logged out. Waiting for processes to exit.
Oct 10 09:46:20 compute-1 systemd[1]: session-34.scope: Deactivated successfully.
Oct 10 09:46:20 compute-1 systemd[1]: session-34.scope: Consumed 6.311s CPU time.
Oct 10 09:46:20 compute-1 systemd-logind[789]: Removed session 34.
Oct 10 09:46:20 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: ignoring --setuser ceph since I am not root
Oct 10 09:46:20 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: ignoring --setgroup ceph since I am not root
Oct 10 09:46:20 compute-1 ceph-mgr[79476]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct 10 09:46:20 compute-1 ceph-mgr[79476]: pidfile_write: ignore empty --pid-file
Oct 10 09:46:20 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'alerts'
Oct 10 09:46:20 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:20.936+0000 7f43bcc25140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 10 09:46:20 compute-1 ceph-mgr[79476]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 10 09:46:20 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'balancer'
Oct 10 09:46:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:21.013+0000 7f43bcc25140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 10 09:46:21 compute-1 ceph-mgr[79476]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 10 09:46:21 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'cephadm'
Oct 10 09:46:21 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Oct 10 09:46:21 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Oct 10 09:46:21 compute-1 ceph-mon[79167]: 4.3 scrub starts
Oct 10 09:46:21 compute-1 ceph-mon[79167]: 4.3 scrub ok
Oct 10 09:46:21 compute-1 ceph-mon[79167]: 5.2 scrub starts
Oct 10 09:46:21 compute-1 ceph-mon[79167]: 5.2 scrub ok
Oct 10 09:46:21 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1314314115' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Oct 10 09:46:21 compute-1 ceph-mon[79167]: mgrmap e17: compute-0.xkdepb(active, since 6s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:46:21 compute-1 ceph-mon[79167]: 6.b scrub starts
Oct 10 09:46:21 compute-1 ceph-mon[79167]: 6.b scrub ok
Oct 10 09:46:21 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2158945969' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Oct 10 09:46:21 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'crash'
Oct 10 09:46:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:21.861+0000 7f43bcc25140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 10 09:46:21 compute-1 ceph-mgr[79476]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 10 09:46:21 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'dashboard'
Oct 10 09:46:22 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:46:22 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 3.3 deep-scrub starts
Oct 10 09:46:22 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 3.3 deep-scrub ok
Oct 10 09:46:22 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'devicehealth'
Oct 10 09:46:22 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:22.460+0000 7f43bcc25140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 10 09:46:22 compute-1 ceph-mgr[79476]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 10 09:46:22 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'diskprediction_local'
Oct 10 09:46:22 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 10 09:46:22 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 10 09:46:22 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]:   from numpy import show_config as show_numpy_config
Oct 10 09:46:22 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:22.612+0000 7f43bcc25140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 10 09:46:22 compute-1 ceph-mgr[79476]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 10 09:46:22 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'influx'
Oct 10 09:46:22 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:22.678+0000 7f43bcc25140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 10 09:46:22 compute-1 ceph-mgr[79476]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 10 09:46:22 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'insights'
Oct 10 09:46:22 compute-1 ceph-mon[79167]: 4.1d scrub starts
Oct 10 09:46:22 compute-1 ceph-mon[79167]: 4.1d scrub ok
Oct 10 09:46:22 compute-1 ceph-mon[79167]: 6.5 scrub starts
Oct 10 09:46:22 compute-1 ceph-mon[79167]: 6.5 scrub ok
Oct 10 09:46:22 compute-1 ceph-mon[79167]: 4.17 scrub starts
Oct 10 09:46:22 compute-1 ceph-mon[79167]: 4.17 scrub ok
Oct 10 09:46:22 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2158945969' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Oct 10 09:46:22 compute-1 ceph-mon[79167]: mgrmap e18: compute-0.xkdepb(active, since 7s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:46:22 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'iostat'
Oct 10 09:46:22 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:22.803+0000 7f43bcc25140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 10 09:46:22 compute-1 ceph-mgr[79476]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 10 09:46:22 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'k8sevents'
Oct 10 09:46:23 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'localpool'
Oct 10 09:46:23 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'mds_autoscaler'
Oct 10 09:46:23 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Oct 10 09:46:23 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Oct 10 09:46:23 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'mirroring'
Oct 10 09:46:23 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'nfs'
Oct 10 09:46:23 compute-1 ceph-mon[79167]: 3.1d scrub starts
Oct 10 09:46:23 compute-1 ceph-mon[79167]: 3.1d scrub ok
Oct 10 09:46:23 compute-1 ceph-mon[79167]: 3.3 deep-scrub starts
Oct 10 09:46:23 compute-1 ceph-mon[79167]: 3.3 deep-scrub ok
Oct 10 09:46:23 compute-1 ceph-mon[79167]: 4.16 scrub starts
Oct 10 09:46:23 compute-1 ceph-mon[79167]: 4.16 scrub ok
Oct 10 09:46:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:23.726+0000 7f43bcc25140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 10 09:46:23 compute-1 ceph-mgr[79476]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 10 09:46:23 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'orchestrator'
Oct 10 09:46:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:23.935+0000 7f43bcc25140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 10 09:46:23 compute-1 ceph-mgr[79476]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 10 09:46:23 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'osd_perf_query'
Oct 10 09:46:24 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:24.012+0000 7f43bcc25140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 10 09:46:24 compute-1 ceph-mgr[79476]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 10 09:46:24 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'osd_support'
Oct 10 09:46:24 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:24.077+0000 7f43bcc25140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 10 09:46:24 compute-1 ceph-mgr[79476]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 10 09:46:24 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'pg_autoscaler'
Oct 10 09:46:24 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:24.150+0000 7f43bcc25140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 10 09:46:24 compute-1 ceph-mgr[79476]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 10 09:46:24 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'progress'
Oct 10 09:46:24 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:24.215+0000 7f43bcc25140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 10 09:46:24 compute-1 ceph-mgr[79476]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 10 09:46:24 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'prometheus'
Oct 10 09:46:24 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Oct 10 09:46:24 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Oct 10 09:46:24 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:24.528+0000 7f43bcc25140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 10 09:46:24 compute-1 ceph-mgr[79476]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 10 09:46:24 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'rbd_support'
Oct 10 09:46:24 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:24.622+0000 7f43bcc25140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 10 09:46:24 compute-1 ceph-mgr[79476]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 10 09:46:24 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'restful'
Oct 10 09:46:24 compute-1 ceph-mon[79167]: 3.9 deep-scrub starts
Oct 10 09:46:24 compute-1 ceph-mon[79167]: 3.9 deep-scrub ok
Oct 10 09:46:24 compute-1 ceph-mon[79167]: 5.1 scrub starts
Oct 10 09:46:24 compute-1 ceph-mon[79167]: 5.1 scrub ok
Oct 10 09:46:24 compute-1 ceph-mon[79167]: 5.17 scrub starts
Oct 10 09:46:24 compute-1 ceph-mon[79167]: 5.17 scrub ok
Oct 10 09:46:24 compute-1 ceph-mon[79167]: 5.12 scrub starts
Oct 10 09:46:24 compute-1 ceph-mon[79167]: 5.12 scrub ok
Oct 10 09:46:24 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'rgw'
Oct 10 09:46:25 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:25.047+0000 7f43bcc25140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 10 09:46:25 compute-1 ceph-mgr[79476]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 10 09:46:25 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'rook'
Oct 10 09:46:25 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 4.e deep-scrub starts
Oct 10 09:46:25 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 4.e deep-scrub ok
Oct 10 09:46:25 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:25.561+0000 7f43bcc25140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 10 09:46:25 compute-1 ceph-mgr[79476]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 10 09:46:25 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'selftest'
Oct 10 09:46:25 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:25.630+0000 7f43bcc25140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 10 09:46:25 compute-1 ceph-mgr[79476]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 10 09:46:25 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'snap_schedule'
Oct 10 09:46:25 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:25.715+0000 7f43bcc25140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 10 09:46:25 compute-1 ceph-mgr[79476]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 10 09:46:25 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'stats'
Oct 10 09:46:25 compute-1 ceph-mon[79167]: 3.5 scrub starts
Oct 10 09:46:25 compute-1 ceph-mon[79167]: 3.5 scrub ok
Oct 10 09:46:25 compute-1 ceph-mon[79167]: 6.14 scrub starts
Oct 10 09:46:25 compute-1 ceph-mon[79167]: 6.14 scrub ok
Oct 10 09:46:25 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'status'
Oct 10 09:46:25 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:25.860+0000 7f43bcc25140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 10 09:46:25 compute-1 ceph-mgr[79476]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 10 09:46:25 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'telegraf'
Oct 10 09:46:25 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:25.931+0000 7f43bcc25140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 10 09:46:25 compute-1 ceph-mgr[79476]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 10 09:46:25 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'telemetry'
Oct 10 09:46:26 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:26.084+0000 7f43bcc25140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 10 09:46:26 compute-1 ceph-mgr[79476]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 10 09:46:26 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'test_orchestrator'
Oct 10 09:46:26 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.f scrub starts
Oct 10 09:46:26 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.f scrub ok
Oct 10 09:46:26 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:26.294+0000 7f43bcc25140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 10 09:46:26 compute-1 ceph-mgr[79476]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 10 09:46:26 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'volumes'
Oct 10 09:46:26 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:26.539+0000 7f43bcc25140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 10 09:46:26 compute-1 ceph-mgr[79476]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 10 09:46:26 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'zabbix'
Oct 10 09:46:26 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:26.602+0000 7f43bcc25140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 10 09:46:26 compute-1 ceph-mgr[79476]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 10 09:46:26 compute-1 ceph-mgr[79476]: ms_deliver_dispatch: unhandled message 0x5616dd955860 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Oct 10 09:46:26 compute-1 ceph-mgr[79476]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct 10 09:46:26 compute-1 ceph-mgr[79476]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct 10 09:46:26 compute-1 ceph-mgr[79476]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct 10 09:46:26 compute-1 ceph-mgr[79476]: mgr respawn  1: '-n'
Oct 10 09:46:26 compute-1 ceph-mgr[79476]: mgr respawn  2: 'mgr.compute-1.rfugxc'
Oct 10 09:46:26 compute-1 ceph-mgr[79476]: mgr respawn  3: '-f'
Oct 10 09:46:26 compute-1 ceph-mgr[79476]: mgr respawn  4: '--setuser'
Oct 10 09:46:26 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: ignoring --setuser ceph since I am not root
Oct 10 09:46:26 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: ignoring --setgroup ceph since I am not root
Oct 10 09:46:26 compute-1 ceph-mgr[79476]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct 10 09:46:26 compute-1 ceph-mgr[79476]: pidfile_write: ignore empty --pid-file
Oct 10 09:46:26 compute-1 ceph-mon[79167]: 5.4 scrub starts
Oct 10 09:46:26 compute-1 ceph-mon[79167]: 5.4 scrub ok
Oct 10 09:46:26 compute-1 ceph-mon[79167]: 4.e deep-scrub starts
Oct 10 09:46:26 compute-1 ceph-mon[79167]: 4.e deep-scrub ok
Oct 10 09:46:26 compute-1 ceph-mon[79167]: 3.12 scrub starts
Oct 10 09:46:26 compute-1 ceph-mon[79167]: 3.12 scrub ok
Oct 10 09:46:26 compute-1 ceph-mon[79167]: Standby manager daemon compute-1.rfugxc restarted
Oct 10 09:46:26 compute-1 ceph-mon[79167]: Standby manager daemon compute-1.rfugxc started
Oct 10 09:46:26 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'alerts'
Oct 10 09:46:26 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:26.861+0000 7f2ea663d140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 10 09:46:26 compute-1 ceph-mgr[79476]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 10 09:46:26 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'balancer'
Oct 10 09:46:26 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:26.940+0000 7f2ea663d140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 10 09:46:26 compute-1 ceph-mgr[79476]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 10 09:46:26 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'cephadm'
Oct 10 09:46:26 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e37 e37: 3 total, 3 up, 3 in
Oct 10 09:46:27 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:46:27 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Oct 10 09:46:27 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Oct 10 09:46:27 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'crash'
Oct 10 09:46:27 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:27.675+0000 7f2ea663d140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 10 09:46:27 compute-1 ceph-mgr[79476]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 10 09:46:27 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'dashboard'
Oct 10 09:46:27 compute-1 ceph-mon[79167]: 6.12 deep-scrub starts
Oct 10 09:46:27 compute-1 ceph-mon[79167]: 6.12 deep-scrub ok
Oct 10 09:46:27 compute-1 ceph-mon[79167]: 5.f scrub starts
Oct 10 09:46:27 compute-1 ceph-mon[79167]: 5.f scrub ok
Oct 10 09:46:27 compute-1 ceph-mon[79167]: mgrmap e19: compute-0.xkdepb(active, since 12s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:46:27 compute-1 ceph-mon[79167]: 5.14 scrub starts
Oct 10 09:46:27 compute-1 ceph-mon[79167]: 5.14 scrub ok
Oct 10 09:46:27 compute-1 ceph-mon[79167]: Active manager daemon compute-0.xkdepb restarted
Oct 10 09:46:27 compute-1 ceph-mon[79167]: Activating manager daemon compute-0.xkdepb
Oct 10 09:46:27 compute-1 ceph-mon[79167]: osdmap e37: 3 total, 3 up, 3 in
Oct 10 09:46:27 compute-1 ceph-mon[79167]: mgrmap e20: compute-0.xkdepb(active, starting, since 0.0290775s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:46:27 compute-1 ceph-mon[79167]: Standby manager daemon compute-2.gkrssp restarted
Oct 10 09:46:27 compute-1 ceph-mon[79167]: Standby manager daemon compute-2.gkrssp started
Oct 10 09:46:28 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'devicehealth'
Oct 10 09:46:28 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Oct 10 09:46:28 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Oct 10 09:46:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:28.260+0000 7f2ea663d140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 10 09:46:28 compute-1 ceph-mgr[79476]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 10 09:46:28 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'diskprediction_local'
Oct 10 09:46:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 10 09:46:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 10 09:46:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]:   from numpy import show_config as show_numpy_config
Oct 10 09:46:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:28.433+0000 7f2ea663d140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 10 09:46:28 compute-1 ceph-mgr[79476]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 10 09:46:28 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'influx'
Oct 10 09:46:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:28.497+0000 7f2ea663d140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 10 09:46:28 compute-1 ceph-mgr[79476]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 10 09:46:28 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'insights'
Oct 10 09:46:28 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'iostat'
Oct 10 09:46:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:28.625+0000 7f2ea663d140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 10 09:46:28 compute-1 ceph-mgr[79476]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 10 09:46:28 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'k8sevents'
Oct 10 09:46:28 compute-1 ceph-mon[79167]: 6.1c scrub starts
Oct 10 09:46:28 compute-1 ceph-mon[79167]: 6.1c scrub ok
Oct 10 09:46:28 compute-1 ceph-mon[79167]: 6.3 scrub starts
Oct 10 09:46:28 compute-1 ceph-mon[79167]: 6.3 scrub ok
Oct 10 09:46:28 compute-1 ceph-mon[79167]: 6.16 scrub starts
Oct 10 09:46:28 compute-1 ceph-mon[79167]: 6.16 scrub ok
Oct 10 09:46:28 compute-1 ceph-mon[79167]: mgrmap e21: compute-0.xkdepb(active, starting, since 1.0465s), standbys: compute-2.gkrssp, compute-1.rfugxc
Oct 10 09:46:28 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'localpool'
Oct 10 09:46:29 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'mds_autoscaler'
Oct 10 09:46:29 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 6.d scrub starts
Oct 10 09:46:29 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 6.d scrub ok
Oct 10 09:46:29 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'mirroring'
Oct 10 09:46:29 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'nfs'
Oct 10 09:46:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:29.566+0000 7f2ea663d140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 10 09:46:29 compute-1 ceph-mgr[79476]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 10 09:46:29 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'orchestrator'
Oct 10 09:46:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:29.769+0000 7f2ea663d140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 10 09:46:29 compute-1 ceph-mgr[79476]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 10 09:46:29 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'osd_perf_query'
Oct 10 09:46:29 compute-1 ceph-mon[79167]: 5.13 scrub starts
Oct 10 09:46:29 compute-1 ceph-mon[79167]: 5.13 scrub ok
Oct 10 09:46:29 compute-1 ceph-mon[79167]: 6.2 scrub starts
Oct 10 09:46:29 compute-1 ceph-mon[79167]: 6.2 scrub ok
Oct 10 09:46:29 compute-1 ceph-mon[79167]: 6.11 scrub starts
Oct 10 09:46:29 compute-1 ceph-mon[79167]: 6.11 scrub ok
Oct 10 09:46:29 compute-1 ceph-mon[79167]: 6.1 scrub starts
Oct 10 09:46:29 compute-1 ceph-mon[79167]: 6.1 scrub ok
Oct 10 09:46:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:29.849+0000 7f2ea663d140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 10 09:46:29 compute-1 ceph-mgr[79476]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 10 09:46:29 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'osd_support'
Oct 10 09:46:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:29.913+0000 7f2ea663d140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 10 09:46:29 compute-1 ceph-mgr[79476]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 10 09:46:29 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'pg_autoscaler'
Oct 10 09:46:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:29.988+0000 7f2ea663d140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 10 09:46:29 compute-1 ceph-mgr[79476]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 10 09:46:29 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'progress'
Oct 10 09:46:30 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:30.060+0000 7f2ea663d140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 10 09:46:30 compute-1 ceph-mgr[79476]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 10 09:46:30 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'prometheus'
Oct 10 09:46:30 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 6.e scrub starts
Oct 10 09:46:30 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 6.e scrub ok
Oct 10 09:46:30 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:30.385+0000 7f2ea663d140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 10 09:46:30 compute-1 ceph-mgr[79476]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 10 09:46:30 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'rbd_support'
Oct 10 09:46:30 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:30.471+0000 7f2ea663d140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 10 09:46:30 compute-1 ceph-mgr[79476]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 10 09:46:30 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'restful'
Oct 10 09:46:30 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'rgw'
Oct 10 09:46:30 compute-1 ceph-mon[79167]: 6.d scrub starts
Oct 10 09:46:30 compute-1 ceph-mon[79167]: 6.d scrub ok
Oct 10 09:46:30 compute-1 ceph-mon[79167]: 6.10 scrub starts
Oct 10 09:46:30 compute-1 ceph-mon[79167]: 6.10 scrub ok
Oct 10 09:46:30 compute-1 ceph-mon[79167]: 3.8 scrub starts
Oct 10 09:46:30 compute-1 ceph-mon[79167]: 3.8 scrub ok
Oct 10 09:46:30 compute-1 systemd[1]: Stopping User Manager for UID 42477...
Oct 10 09:46:30 compute-1 systemd[71841]: Activating special unit Exit the Session...
Oct 10 09:46:30 compute-1 systemd[71841]: Stopped target Main User Target.
Oct 10 09:46:30 compute-1 systemd[71841]: Stopped target Basic System.
Oct 10 09:46:30 compute-1 systemd[71841]: Stopped target Paths.
Oct 10 09:46:30 compute-1 systemd[71841]: Stopped target Sockets.
Oct 10 09:46:30 compute-1 systemd[71841]: Stopped target Timers.
Oct 10 09:46:30 compute-1 systemd[71841]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 10 09:46:30 compute-1 systemd[71841]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 10 09:46:30 compute-1 systemd[71841]: Closed D-Bus User Message Bus Socket.
Oct 10 09:46:30 compute-1 systemd[71841]: Stopped Create User's Volatile Files and Directories.
Oct 10 09:46:30 compute-1 systemd[71841]: Removed slice User Application Slice.
Oct 10 09:46:30 compute-1 systemd[71841]: Reached target Shutdown.
Oct 10 09:46:30 compute-1 systemd[71841]: Finished Exit the Session.
Oct 10 09:46:30 compute-1 systemd[71841]: Reached target Exit the Session.
Oct 10 09:46:30 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:30.873+0000 7f2ea663d140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 10 09:46:30 compute-1 ceph-mgr[79476]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 10 09:46:30 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'rook'
Oct 10 09:46:30 compute-1 systemd[1]: user@42477.service: Deactivated successfully.
Oct 10 09:46:30 compute-1 systemd[1]: Stopped User Manager for UID 42477.
Oct 10 09:46:30 compute-1 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Oct 10 09:46:30 compute-1 systemd[1]: run-user-42477.mount: Deactivated successfully.
Oct 10 09:46:30 compute-1 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Oct 10 09:46:30 compute-1 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Oct 10 09:46:30 compute-1 systemd[1]: Removed slice User Slice of UID 42477.
Oct 10 09:46:30 compute-1 systemd[1]: user-42477.slice: Consumed 1min 20.821s CPU time.
Oct 10 09:46:31 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Oct 10 09:46:31 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Oct 10 09:46:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:31.440+0000 7f2ea663d140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 10 09:46:31 compute-1 ceph-mgr[79476]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 10 09:46:31 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'selftest'
Oct 10 09:46:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:31.553+0000 7f2ea663d140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 10 09:46:31 compute-1 ceph-mgr[79476]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 10 09:46:31 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'snap_schedule'
Oct 10 09:46:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:31.633+0000 7f2ea663d140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 10 09:46:31 compute-1 ceph-mgr[79476]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 10 09:46:31 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'stats'
Oct 10 09:46:31 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'status'
Oct 10 09:46:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:31.783+0000 7f2ea663d140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 10 09:46:31 compute-1 ceph-mgr[79476]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 10 09:46:31 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'telegraf'
Oct 10 09:46:31 compute-1 ceph-mon[79167]: 6.e scrub starts
Oct 10 09:46:31 compute-1 ceph-mon[79167]: 6.e scrub ok
Oct 10 09:46:31 compute-1 ceph-mon[79167]: 6.13 scrub starts
Oct 10 09:46:31 compute-1 ceph-mon[79167]: 6.13 scrub ok
Oct 10 09:46:31 compute-1 ceph-mon[79167]: 2.15 scrub starts
Oct 10 09:46:31 compute-1 ceph-mon[79167]: 2.15 scrub ok
Oct 10 09:46:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:31.856+0000 7f2ea663d140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 10 09:46:31 compute-1 ceph-mgr[79476]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 10 09:46:31 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'telemetry'
Oct 10 09:46:32 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:32.011+0000 7f2ea663d140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 10 09:46:32 compute-1 ceph-mgr[79476]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 10 09:46:32 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'test_orchestrator'
Oct 10 09:46:32 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:46:32 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:32.228+0000 7f2ea663d140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 10 09:46:32 compute-1 ceph-mgr[79476]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 10 09:46:32 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'volumes'
Oct 10 09:46:32 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 4.c scrub starts
Oct 10 09:46:32 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 4.c scrub ok
Oct 10 09:46:32 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:32.497+0000 7f2ea663d140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 10 09:46:32 compute-1 ceph-mgr[79476]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 10 09:46:32 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'zabbix'
Oct 10 09:46:32 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:46:32.563+0000 7f2ea663d140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 10 09:46:32 compute-1 ceph-mgr[79476]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 10 09:46:32 compute-1 ceph-mgr[79476]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:46:32 compute-1 ceph-mgr[79476]: mgr load Constructed class from module: dashboard
Oct 10 09:46:32 compute-1 ceph-mgr[79476]: [dashboard INFO root] server: ssl=no host=192.168.122.101 port=8443
Oct 10 09:46:32 compute-1 ceph-mgr[79476]: [dashboard INFO root] Configured CherryPy, starting engine...
Oct 10 09:46:32 compute-1 ceph-mgr[79476]: [dashboard INFO root] Starting engine...
Oct 10 09:46:32 compute-1 ceph-mgr[79476]: ms_deliver_dispatch: unhandled message 0x562510b79860 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Oct 10 09:46:32 compute-1 ceph-mgr[79476]: [dashboard INFO root] Engine started...
Oct 10 09:46:32 compute-1 ceph-mon[79167]: 5.1c scrub starts
Oct 10 09:46:32 compute-1 ceph-mon[79167]: 5.1c scrub ok
Oct 10 09:46:32 compute-1 ceph-mon[79167]: 5.1e scrub starts
Oct 10 09:46:32 compute-1 ceph-mon[79167]: 5.1e scrub ok
Oct 10 09:46:32 compute-1 ceph-mon[79167]: 2.12 deep-scrub starts
Oct 10 09:46:32 compute-1 ceph-mon[79167]: 2.12 deep-scrub ok
Oct 10 09:46:32 compute-1 ceph-mon[79167]: Standby manager daemon compute-1.rfugxc restarted
Oct 10 09:46:32 compute-1 ceph-mon[79167]: Standby manager daemon compute-1.rfugxc started
Oct 10 09:46:33 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Oct 10 09:46:33 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Oct 10 09:46:33 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e38 e38: 3 total, 3 up, 3 in
Oct 10 09:46:33 compute-1 ceph-mon[79167]: 4.c scrub starts
Oct 10 09:46:33 compute-1 ceph-mon[79167]: 4.c scrub ok
Oct 10 09:46:33 compute-1 ceph-mon[79167]: 6.1d scrub starts
Oct 10 09:46:33 compute-1 ceph-mon[79167]: 6.1d scrub ok
Oct 10 09:46:33 compute-1 ceph-mon[79167]: 2.13 scrub starts
Oct 10 09:46:33 compute-1 ceph-mon[79167]: 2.13 scrub ok
Oct 10 09:46:33 compute-1 ceph-mon[79167]: mgrmap e22: compute-0.xkdepb(active, starting, since 5s), standbys: compute-2.gkrssp, compute-1.rfugxc
Oct 10 09:46:33 compute-1 ceph-mon[79167]: Standby manager daemon compute-2.gkrssp restarted
Oct 10 09:46:33 compute-1 ceph-mon[79167]: Standby manager daemon compute-2.gkrssp started
Oct 10 09:46:33 compute-1 ceph-mon[79167]: Active manager daemon compute-0.xkdepb restarted
Oct 10 09:46:33 compute-1 ceph-mon[79167]: Activating manager daemon compute-0.xkdepb
Oct 10 09:46:33 compute-1 ceph-mon[79167]: osdmap e38: 3 total, 3 up, 3 in
Oct 10 09:46:33 compute-1 ceph-mon[79167]: mgrmap e23: compute-0.xkdepb(active, starting, since 0.0311202s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:46:33 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 10 09:46:33 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 10 09:46:33 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 10 09:46:33 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr metadata", "who": "compute-0.xkdepb", "id": "compute-0.xkdepb"}]: dispatch
Oct 10 09:46:33 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr metadata", "who": "compute-1.rfugxc", "id": "compute-1.rfugxc"}]: dispatch
Oct 10 09:46:33 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr metadata", "who": "compute-2.gkrssp", "id": "compute-2.gkrssp"}]: dispatch
Oct 10 09:46:33 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 09:46:33 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:46:33 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:46:33 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 10 09:46:33 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 10 09:46:33 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 10 09:46:33 compute-1 ceph-mon[79167]: Manager daemon compute-0.xkdepb is now available
Oct 10 09:46:33 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xkdepb/mirror_snapshot_schedule"}]: dispatch
Oct 10 09:46:33 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xkdepb/trash_purge_schedule"}]: dispatch
Oct 10 09:46:33 compute-1 sshd-session[82222]: Accepted publickey for ceph-admin from 192.168.122.100 port 50142 ssh2: RSA SHA256:iFwOnwcB2x2Q1gpAWZobZa2jCZZy75CuUHv4ViVnHA0
Oct 10 09:46:33 compute-1 systemd[1]: Created slice User Slice of UID 42477.
Oct 10 09:46:33 compute-1 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct 10 09:46:33 compute-1 systemd-logind[789]: New session 35 of user ceph-admin.
Oct 10 09:46:33 compute-1 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct 10 09:46:34 compute-1 systemd[1]: Starting User Manager for UID 42477...
Oct 10 09:46:34 compute-1 systemd[82226]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 10 09:46:34 compute-1 systemd[82226]: Queued start job for default target Main User Target.
Oct 10 09:46:34 compute-1 systemd[82226]: Created slice User Application Slice.
Oct 10 09:46:34 compute-1 systemd[82226]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 10 09:46:34 compute-1 systemd[82226]: Started Daily Cleanup of User's Temporary Directories.
Oct 10 09:46:34 compute-1 systemd[82226]: Reached target Paths.
Oct 10 09:46:34 compute-1 systemd[82226]: Reached target Timers.
Oct 10 09:46:34 compute-1 systemd[82226]: Starting D-Bus User Message Bus Socket...
Oct 10 09:46:34 compute-1 systemd[82226]: Starting Create User's Volatile Files and Directories...
Oct 10 09:46:34 compute-1 systemd[82226]: Listening on D-Bus User Message Bus Socket.
Oct 10 09:46:34 compute-1 systemd[82226]: Reached target Sockets.
Oct 10 09:46:34 compute-1 systemd[82226]: Finished Create User's Volatile Files and Directories.
Oct 10 09:46:34 compute-1 systemd[82226]: Reached target Basic System.
Oct 10 09:46:34 compute-1 systemd[82226]: Reached target Main User Target.
Oct 10 09:46:34 compute-1 systemd[82226]: Startup finished in 158ms.
Oct 10 09:46:34 compute-1 systemd[1]: Started User Manager for UID 42477.
Oct 10 09:46:34 compute-1 systemd[1]: Started Session 35 of User ceph-admin.
Oct 10 09:46:34 compute-1 sshd-session[82222]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 10 09:46:34 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Oct 10 09:46:34 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Oct 10 09:46:34 compute-1 sudo[82242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:46:34 compute-1 sudo[82242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:34 compute-1 sudo[82242]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:34 compute-1 sudo[82267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 10 09:46:34 compute-1 sudo[82267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:34 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).mds e2 new map
Oct 10 09:46:34 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).mds e2 print_map
                                           e2
                                           btime 2025-10-10T09:46:34:511425+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-10T09:46:34.511367+0000
                                           modified        2025-10-10T09:46:34.511367+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
Oct 10 09:46:34 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e39 e39: 3 total, 3 up, 3 in
Oct 10 09:46:34 compute-1 ceph-mon[79167]: 3.1c scrub starts
Oct 10 09:46:34 compute-1 ceph-mon[79167]: 3.1c scrub ok
Oct 10 09:46:34 compute-1 ceph-mon[79167]: 3.19 deep-scrub starts
Oct 10 09:46:34 compute-1 ceph-mon[79167]: 3.19 deep-scrub ok
Oct 10 09:46:34 compute-1 ceph-mon[79167]: 2.18 scrub starts
Oct 10 09:46:34 compute-1 ceph-mon[79167]: 2.18 scrub ok
Oct 10 09:46:34 compute-1 ceph-mon[79167]: mgrmap e24: compute-0.xkdepb(active, since 1.0568s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:46:34 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct 10 09:46:34 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct 10 09:46:34 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct 10 09:46:34 compute-1 ceph-mon[79167]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct 10 09:46:34 compute-1 ceph-mon[79167]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct 10 09:46:34 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct 10 09:46:34 compute-1 ceph-mon[79167]: osdmap e39: 3 total, 3 up, 3 in
Oct 10 09:46:34 compute-1 ceph-mon[79167]: fsmap cephfs:0
Oct 10 09:46:34 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:35 compute-1 podman[82365]: 2025-10-10 09:46:35.195998557 +0000 UTC m=+0.092998555 container exec 8a2c16c69263d410aefee28722d7403147210c67386f896d09115878d61e6ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct 10 09:46:35 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Oct 10 09:46:35 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Oct 10 09:46:35 compute-1 podman[82365]: 2025-10-10 09:46:35.320882895 +0000 UTC m=+0.217882843 container exec_died 8a2c16c69263d410aefee28722d7403147210c67386f896d09115878d61e6ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Oct 10 09:46:35 compute-1 sudo[82267]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:35 compute-1 sudo[82451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:46:35 compute-1 sudo[82451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:35 compute-1 sudo[82451]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:35 compute-1 ceph-mon[79167]: 4.1b scrub starts
Oct 10 09:46:35 compute-1 ceph-mon[79167]: 4.1b scrub ok
Oct 10 09:46:35 compute-1 ceph-mon[79167]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct 10 09:46:35 compute-1 ceph-mon[79167]: 2.19 scrub starts
Oct 10 09:46:35 compute-1 ceph-mon[79167]: 2.19 scrub ok
Oct 10 09:46:35 compute-1 ceph-mon[79167]: [10/Oct/2025:09:46:34] ENGINE Bus STARTING
Oct 10 09:46:35 compute-1 ceph-mon[79167]: 2.10 scrub starts
Oct 10 09:46:35 compute-1 ceph-mon[79167]: 2.10 scrub ok
Oct 10 09:46:35 compute-1 ceph-mon[79167]: [10/Oct/2025:09:46:34] ENGINE Serving on http://192.168.122.100:8765
Oct 10 09:46:35 compute-1 ceph-mon[79167]: [10/Oct/2025:09:46:35] ENGINE Serving on https://192.168.122.100:7150
Oct 10 09:46:35 compute-1 ceph-mon[79167]: [10/Oct/2025:09:46:35] ENGINE Bus STARTED
Oct 10 09:46:35 compute-1 ceph-mon[79167]: [10/Oct/2025:09:46:35] ENGINE Client ('192.168.122.100', 60804) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 10 09:46:35 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:35 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:35 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:35 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:35 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:35 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:35 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:35 compute-1 sudo[82476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 09:46:35 compute-1 sudo[82476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:36 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Oct 10 09:46:36 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Oct 10 09:46:36 compute-1 sudo[82476]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:36 compute-1 sudo[82532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:46:36 compute-1 sudo[82532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:36 compute-1 sudo[82532]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:36 compute-1 sudo[82557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Oct 10 09:46:36 compute-1 sudo[82557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:36 compute-1 ceph-mon[79167]: 5.1b scrub starts
Oct 10 09:46:36 compute-1 ceph-mon[79167]: 5.1b scrub ok
Oct 10 09:46:36 compute-1 ceph-mon[79167]: pgmap v5: 162 pgs: 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:46:36 compute-1 ceph-mon[79167]: from='client.14469 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:46:36 compute-1 ceph-mon[79167]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct 10 09:46:36 compute-1 ceph-mon[79167]: 2.6 scrub starts
Oct 10 09:46:36 compute-1 ceph-mon[79167]: 2.6 scrub ok
Oct 10 09:46:36 compute-1 ceph-mon[79167]: 2.f scrub starts
Oct 10 09:46:36 compute-1 ceph-mon[79167]: 2.f scrub ok
Oct 10 09:46:36 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:36 compute-1 ceph-mon[79167]: mgrmap e25: compute-0.xkdepb(active, since 2s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:46:36 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:36 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct 10 09:46:36 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Oct 10 09:46:36 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:36 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:36 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct 10 09:46:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:46:37 compute-1 sudo[82557]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:37 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Oct 10 09:46:37 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Oct 10 09:46:37 compute-1 sudo[82598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 10 09:46:37 compute-1 sudo[82598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:37 compute-1 sudo[82598]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:37 compute-1 sudo[82623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph
Oct 10 09:46:37 compute-1 sudo[82623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:37 compute-1 sudo[82623]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:37 compute-1 sudo[82648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new
Oct 10 09:46:37 compute-1 sudo[82648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:37 compute-1 sudo[82648]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e40 e40: 3 total, 3 up, 3 in
Oct 10 09:46:37 compute-1 sudo[82673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:46:37 compute-1 sudo[82673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:37 compute-1 sudo[82673]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:37 compute-1 sudo[82698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new
Oct 10 09:46:37 compute-1 sudo[82698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:37 compute-1 sudo[82698]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:37 compute-1 sudo[82746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new
Oct 10 09:46:37 compute-1 sudo[82746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:37 compute-1 sudo[82746]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:37 compute-1 ceph-mon[79167]: 4.18 scrub starts
Oct 10 09:46:37 compute-1 ceph-mon[79167]: 4.18 scrub ok
Oct 10 09:46:37 compute-1 ceph-mon[79167]: Adjusting osd_memory_target on compute-2 to 128.0M
Oct 10 09:46:37 compute-1 ceph-mon[79167]: Unable to set osd_memory_target on compute-2 to 134243532: error parsing value: Value '134243532' is below minimum 939524096
Oct 10 09:46:37 compute-1 ceph-mon[79167]: from='client.14481 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 09:46:37 compute-1 ceph-mon[79167]: 2.e scrub starts
Oct 10 09:46:37 compute-1 ceph-mon[79167]: 2.e scrub ok
Oct 10 09:46:37 compute-1 ceph-mon[79167]: Adjusting osd_memory_target on compute-0 to 128.0M
Oct 10 09:46:37 compute-1 ceph-mon[79167]: Unable to set osd_memory_target on compute-0 to 134240665: error parsing value: Value '134240665' is below minimum 939524096
Oct 10 09:46:37 compute-1 ceph-mon[79167]: 2.c scrub starts
Oct 10 09:46:37 compute-1 ceph-mon[79167]: 2.c scrub ok
Oct 10 09:46:37 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:37 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:37 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct 10 09:46:37 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:46:37 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:46:37 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Oct 10 09:46:37 compute-1 ceph-mon[79167]: osdmap e40: 3 total, 3 up, 3 in
Oct 10 09:46:37 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Oct 10 09:46:37 compute-1 sudo[82771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new
Oct 10 09:46:37 compute-1 sudo[82771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:37 compute-1 sudo[82771]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:38 compute-1 sudo[82796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Oct 10 09:46:38 compute-1 sudo[82796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:38 compute-1 sudo[82796]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:38 compute-1 sudo[82821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config
Oct 10 09:46:38 compute-1 sudo[82821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:38 compute-1 sudo[82821]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:38 compute-1 sudo[82846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config
Oct 10 09:46:38 compute-1 sudo[82846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:38 compute-1 sudo[82846]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:38 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Oct 10 09:46:38 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Oct 10 09:46:38 compute-1 sudo[82871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new
Oct 10 09:46:38 compute-1 sudo[82871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:38 compute-1 sudo[82871]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:38 compute-1 sudo[82896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:46:38 compute-1 sudo[82896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:38 compute-1 sudo[82896]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:38 compute-1 sudo[82921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new
Oct 10 09:46:38 compute-1 sudo[82921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:38 compute-1 sudo[82921]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:38 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e41 e41: 3 total, 3 up, 3 in
Oct 10 09:46:38 compute-1 sudo[82969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new
Oct 10 09:46:38 compute-1 sudo[82969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:38 compute-1 sudo[82969]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:38 compute-1 sudo[82994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new
Oct 10 09:46:38 compute-1 sudo[82994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:38 compute-1 sudo[82994]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:38 compute-1 sudo[83019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:46:38 compute-1 sudo[83019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:38 compute-1 sudo[83019]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:38 compute-1 sudo[83044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 10 09:46:38 compute-1 sudo[83044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:38 compute-1 sudo[83044]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:38 compute-1 ceph-mon[79167]: Adjusting osd_memory_target on compute-1 to 128.0M
Oct 10 09:46:38 compute-1 ceph-mon[79167]: Unable to set osd_memory_target on compute-1 to 134243532: error parsing value: Value '134243532' is below minimum 939524096
Oct 10 09:46:38 compute-1 ceph-mon[79167]: Updating compute-0:/etc/ceph/ceph.conf
Oct 10 09:46:38 compute-1 ceph-mon[79167]: Updating compute-1:/etc/ceph/ceph.conf
Oct 10 09:46:38 compute-1 ceph-mon[79167]: Updating compute-2:/etc/ceph/ceph.conf
Oct 10 09:46:38 compute-1 ceph-mon[79167]: 4.1a scrub starts
Oct 10 09:46:38 compute-1 ceph-mon[79167]: 4.1a scrub ok
Oct 10 09:46:38 compute-1 ceph-mon[79167]: pgmap v6: 162 pgs: 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:46:38 compute-1 ceph-mon[79167]: Updating compute-2:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:46:38 compute-1 ceph-mon[79167]: 2.d scrub starts
Oct 10 09:46:38 compute-1 ceph-mon[79167]: 2.d scrub ok
Oct 10 09:46:38 compute-1 ceph-mon[79167]: Updating compute-0:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:46:38 compute-1 sudo[83069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph
Oct 10 09:46:38 compute-1 ceph-mon[79167]: Updating compute-1:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:46:38 compute-1 ceph-mon[79167]: mgrmap e26: compute-0.xkdepb(active, since 4s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:46:38 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Oct 10 09:46:38 compute-1 ceph-mon[79167]: osdmap e41: 3 total, 3 up, 3 in
Oct 10 09:46:38 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:38 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:38 compute-1 sudo[83069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:38 compute-1 sudo[83069]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:39 compute-1 sudo[83094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.client.admin.keyring.new
Oct 10 09:46:39 compute-1 sudo[83094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:39 compute-1 sudo[83094]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:39 compute-1 sudo[83119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:46:39 compute-1 sudo[83119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:39 compute-1 sudo[83119]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:39 compute-1 sudo[83144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.client.admin.keyring.new
Oct 10 09:46:39 compute-1 sudo[83144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:39 compute-1 sudo[83144]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:39 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.18 deep-scrub starts
Oct 10 09:46:39 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 5.18 deep-scrub ok
Oct 10 09:46:39 compute-1 sudo[83192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.client.admin.keyring.new
Oct 10 09:46:39 compute-1 sudo[83192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:39 compute-1 sudo[83192]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:39 compute-1 sudo[83217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.client.admin.keyring.new
Oct 10 09:46:39 compute-1 sudo[83217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:39 compute-1 sudo[83217]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:39 compute-1 sudo[83242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Oct 10 09:46:39 compute-1 sudo[83242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:39 compute-1 sudo[83242]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:39 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e42 e42: 3 total, 3 up, 3 in
Oct 10 09:46:39 compute-1 sudo[83267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config
Oct 10 09:46:39 compute-1 sudo[83267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:39 compute-1 sudo[83267]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:39 compute-1 sudo[83292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config
Oct 10 09:46:39 compute-1 sudo[83292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:39 compute-1 sudo[83292]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:39 compute-1 sudo[83317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring.new
Oct 10 09:46:39 compute-1 sudo[83317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:39 compute-1 sudo[83317]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:39 compute-1 sudo[83342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:46:39 compute-1 sudo[83342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:39 compute-1 sudo[83342]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:39 compute-1 ceph-mon[79167]: 4.5 scrub starts
Oct 10 09:46:39 compute-1 ceph-mon[79167]: 4.5 scrub ok
Oct 10 09:46:39 compute-1 ceph-mon[79167]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 10 09:46:39 compute-1 ceph-mon[79167]: Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct 10 09:46:39 compute-1 ceph-mon[79167]: Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct 10 09:46:39 compute-1 ceph-mon[79167]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 10 09:46:39 compute-1 ceph-mon[79167]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 10 09:46:39 compute-1 ceph-mon[79167]: 2.5 scrub starts
Oct 10 09:46:39 compute-1 ceph-mon[79167]: 2.5 scrub ok
Oct 10 09:46:39 compute-1 ceph-mon[79167]: Updating compute-2:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:46:39 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:39 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:39 compute-1 ceph-mon[79167]: osdmap e42: 3 total, 3 up, 3 in
Oct 10 09:46:39 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:39 compute-1 sudo[83367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring.new
Oct 10 09:46:39 compute-1 sudo[83367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:39 compute-1 sudo[83367]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:40 compute-1 sudo[83415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring.new
Oct 10 09:46:40 compute-1 sudo[83415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:40 compute-1 sudo[83415]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:40 compute-1 sudo[83440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring.new
Oct 10 09:46:40 compute-1 sudo[83440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:40 compute-1 sudo[83440]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:40 compute-1 sudo[83465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring.new /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:46:40 compute-1 sudo[83465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:40 compute-1 sudo[83465]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:40 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 6.19 scrub starts
Oct 10 09:46:40 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 6.19 scrub ok
Oct 10 09:46:40 compute-1 ceph-mon[79167]: 5.18 deep-scrub starts
Oct 10 09:46:40 compute-1 ceph-mon[79167]: 5.18 deep-scrub ok
Oct 10 09:46:40 compute-1 ceph-mon[79167]: Updating compute-0:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:46:40 compute-1 ceph-mon[79167]: pgmap v9: 163 pgs: 1 unknown, 162 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:46:40 compute-1 ceph-mon[79167]: Updating compute-1:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:46:40 compute-1 ceph-mon[79167]: 2.b scrub starts
Oct 10 09:46:40 compute-1 ceph-mon[79167]: 2.b scrub ok
Oct 10 09:46:40 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:40 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:40 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:40 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:40 compute-1 ceph-mon[79167]: mgrmap e27: compute-0.xkdepb(active, since 7s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:46:40 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/200213662' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct 10 09:46:40 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/200213662' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct 10 09:46:41 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 6.1a scrub starts
Oct 10 09:46:41 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 6.1a scrub ok
Oct 10 09:46:41 compute-1 ceph-mon[79167]: 6.19 scrub starts
Oct 10 09:46:41 compute-1 ceph-mon[79167]: 6.19 scrub ok
Oct 10 09:46:41 compute-1 ceph-mon[79167]: Deploying daemon node-exporter.compute-0 on compute-0
Oct 10 09:46:41 compute-1 ceph-mon[79167]: 2.1b scrub starts
Oct 10 09:46:41 compute-1 ceph-mon[79167]: 2.1b scrub ok
Oct 10 09:46:42 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:46:42 compute-1 ceph-mon[79167]: 6.1a scrub starts
Oct 10 09:46:42 compute-1 ceph-mon[79167]: 6.1a scrub ok
Oct 10 09:46:42 compute-1 ceph-mon[79167]: pgmap v11: 163 pgs: 163 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Oct 10 09:46:42 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1404388837' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 10 09:46:42 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:42 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:42 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:43 compute-1 sudo[83490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:46:43 compute-1 sudo[83490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:43 compute-1 sudo[83490]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:43 compute-1 sudo[83515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/node-exporter:v1.7.0 --timeout 895 _orch deploy --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:46:43 compute-1 sudo[83515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:43 compute-1 systemd[1]: Reloading.
Oct 10 09:46:43 compute-1 systemd-rc-local-generator[83609]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:46:43 compute-1 systemd-sysv-generator[83612]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:46:43 compute-1 systemd[1]: Reloading.
Oct 10 09:46:43 compute-1 ceph-mon[79167]: Deploying daemon node-exporter.compute-1 on compute-1
Oct 10 09:46:43 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/4210446203' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 09:46:44 compute-1 systemd-rc-local-generator[83640]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:46:44 compute-1 systemd-sysv-generator[83643]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:46:44 compute-1 systemd[1]: Starting Ceph node-exporter.compute-1 for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 09:46:44 compute-1 bash[83706]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0...
Oct 10 09:46:44 compute-1 bash[83706]: Getting image source signatures
Oct 10 09:46:44 compute-1 bash[83706]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24
Oct 10 09:46:44 compute-1 bash[83706]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510
Oct 10 09:46:44 compute-1 bash[83706]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a
Oct 10 09:46:45 compute-1 ceph-mon[79167]: pgmap v12: 163 pgs: 163 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Oct 10 09:46:45 compute-1 bash[83706]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e
Oct 10 09:46:45 compute-1 bash[83706]: Writing manifest to image destination
Oct 10 09:46:45 compute-1 podman[83706]: 2025-10-10 09:46:45.645737804 +0000 UTC m=+1.219258815 container create db44b00524aaf85460d5de4878ad14e99cea939212a287f5217a7de1b31f7321 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:46:45 compute-1 podman[83706]: 2025-10-10 09:46:45.632815083 +0000 UTC m=+1.206336124 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Oct 10 09:46:45 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e94d6beb7318b535b36ba4007be4f72a83f864adf9419ced3dd0ad671753a888/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:45 compute-1 podman[83706]: 2025-10-10 09:46:45.71121401 +0000 UTC m=+1.284735031 container init db44b00524aaf85460d5de4878ad14e99cea939212a287f5217a7de1b31f7321 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:46:45 compute-1 podman[83706]: 2025-10-10 09:46:45.719621498 +0000 UTC m=+1.293142509 container start db44b00524aaf85460d5de4878ad14e99cea939212a287f5217a7de1b31f7321 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:46:45 compute-1 bash[83706]: db44b00524aaf85460d5de4878ad14e99cea939212a287f5217a7de1b31f7321
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.731Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.732Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.733Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.734Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.734Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.734Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Oct 10 09:46:45 compute-1 systemd[1]: Started Ceph node-exporter.compute-1 for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=arp
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=bcache
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=bonding
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=btrfs
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=conntrack
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=cpu
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=cpufreq
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=diskstats
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=dmi
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=edac
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=entropy
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=fibrechannel
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=filefd
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=filesystem
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=hwmon
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=infiniband
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=ipvs
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=loadavg
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=mdadm
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=meminfo
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=netclass
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=netdev
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=netstat
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=nfs
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=nfsd
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=nvme
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=os
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.736Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.737Z caller=node_exporter.go:117 level=info collector=pressure
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.737Z caller=node_exporter.go:117 level=info collector=rapl
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.737Z caller=node_exporter.go:117 level=info collector=schedstat
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.737Z caller=node_exporter.go:117 level=info collector=selinux
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.737Z caller=node_exporter.go:117 level=info collector=sockstat
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.737Z caller=node_exporter.go:117 level=info collector=softnet
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.737Z caller=node_exporter.go:117 level=info collector=stat
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.737Z caller=node_exporter.go:117 level=info collector=tapestats
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.737Z caller=node_exporter.go:117 level=info collector=textfile
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.737Z caller=node_exporter.go:117 level=info collector=thermal_zone
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.737Z caller=node_exporter.go:117 level=info collector=time
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.737Z caller=node_exporter.go:117 level=info collector=udp_queues
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.737Z caller=node_exporter.go:117 level=info collector=uname
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.737Z caller=node_exporter.go:117 level=info collector=vmstat
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.737Z caller=node_exporter.go:117 level=info collector=xfs
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.737Z caller=node_exporter.go:117 level=info collector=zfs
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.740Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Oct 10 09:46:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1[83781]: ts=2025-10-10T09:46:45.740Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Oct 10 09:46:45 compute-1 sudo[83515]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:46 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1088819812' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct 10 09:46:46 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:46 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:46 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:47 compute-1 ceph-mon[79167]: pgmap v13: 163 pgs: 163 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 9 op/s
Oct 10 09:46:47 compute-1 ceph-mon[79167]: Deploying daemon node-exporter.compute-2 on compute-2
Oct 10 09:46:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:46:49 compute-1 ceph-mon[79167]: from='client.14517 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 10 09:46:49 compute-1 ceph-mon[79167]: pgmap v14: 163 pgs: 163 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
Oct 10 09:46:49 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:49 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:49 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:49 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:49 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 09:46:49 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:46:49 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:46:49 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:50 compute-1 ceph-mon[79167]: pgmap v15: 163 pgs: 163 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 7 op/s
Oct 10 09:46:50 compute-1 ceph-mon[79167]: from='client.14523 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 10 09:46:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:46:52 compute-1 ceph-mon[79167]: from='client.14529 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 10 09:46:52 compute-1 ceph-mon[79167]: pgmap v16: 163 pgs: 163 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:46:54 compute-1 ceph-mon[79167]: from='client.14535 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 10 09:46:54 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:54 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:54 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.qujzwn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 10 09:46:54 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.qujzwn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 10 09:46:54 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:54 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:46:54 compute-1 sudo[83790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:46:54 compute-1 sudo[83790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:54 compute-1 sudo[83790]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:54 compute-1 sudo[83815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:46:54 compute-1 sudo[83815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:46:55 compute-1 ceph-mon[79167]: Deploying daemon rgw.rgw.compute-2.qujzwn on compute-2
Oct 10 09:46:55 compute-1 ceph-mon[79167]: pgmap v17: 163 pgs: 163 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:46:55 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/117532342' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 10 09:46:55 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:55 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:55 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:55 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.zajetc", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 10 09:46:55 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.zajetc", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 10 09:46:55 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:55 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:46:55 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e43 e43: 3 total, 3 up, 3 in
Oct 10 09:46:55 compute-1 podman[83883]: 2025-10-10 09:46:55.353176488 +0000 UTC m=+0.068297701 container create 0baaccc9608734db107f8183b8bd68842eacbf8ce42b95a7a9d5c21831d834b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_curran, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct 10 09:46:55 compute-1 systemd[1]: Started libpod-conmon-0baaccc9608734db107f8183b8bd68842eacbf8ce42b95a7a9d5c21831d834b1.scope.
Oct 10 09:46:55 compute-1 podman[83883]: 2025-10-10 09:46:55.322186614 +0000 UTC m=+0.037307827 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:46:55 compute-1 systemd[1]: Started libcrun container.
Oct 10 09:46:55 compute-1 podman[83883]: 2025-10-10 09:46:55.457783255 +0000 UTC m=+0.172904498 container init 0baaccc9608734db107f8183b8bd68842eacbf8ce42b95a7a9d5c21831d834b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_curran, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 10 09:46:55 compute-1 podman[83883]: 2025-10-10 09:46:55.471870069 +0000 UTC m=+0.186991262 container start 0baaccc9608734db107f8183b8bd68842eacbf8ce42b95a7a9d5c21831d834b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_curran, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:46:55 compute-1 podman[83883]: 2025-10-10 09:46:55.476249859 +0000 UTC m=+0.191371202 container attach 0baaccc9608734db107f8183b8bd68842eacbf8ce42b95a7a9d5c21831d834b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_curran, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct 10 09:46:55 compute-1 sad_curran[83899]: 167 167
Oct 10 09:46:55 compute-1 systemd[1]: libpod-0baaccc9608734db107f8183b8bd68842eacbf8ce42b95a7a9d5c21831d834b1.scope: Deactivated successfully.
Oct 10 09:46:55 compute-1 conmon[83899]: conmon 0baaccc9608734db107f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0baaccc9608734db107f8183b8bd68842eacbf8ce42b95a7a9d5c21831d834b1.scope/container/memory.events
Oct 10 09:46:55 compute-1 podman[83883]: 2025-10-10 09:46:55.481692677 +0000 UTC m=+0.196813870 container died 0baaccc9608734db107f8183b8bd68842eacbf8ce42b95a7a9d5c21831d834b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_curran, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:46:55 compute-1 systemd[1]: var-lib-containers-storage-overlay-7af365daf727c63a5dbb1d6f5e60569a23f86b8bd001baa3a66e7129d3d944a2-merged.mount: Deactivated successfully.
Oct 10 09:46:55 compute-1 podman[83883]: 2025-10-10 09:46:55.530578527 +0000 UTC m=+0.245699730 container remove 0baaccc9608734db107f8183b8bd68842eacbf8ce42b95a7a9d5c21831d834b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_curran, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:46:55 compute-1 systemd[1]: libpod-conmon-0baaccc9608734db107f8183b8bd68842eacbf8ce42b95a7a9d5c21831d834b1.scope: Deactivated successfully.
Oct 10 09:46:55 compute-1 systemd[1]: Reloading.
Oct 10 09:46:55 compute-1 systemd-rc-local-generator[83940]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:46:55 compute-1 systemd-sysv-generator[83944]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:46:55 compute-1 systemd[1]: Reloading.
Oct 10 09:46:56 compute-1 systemd-sysv-generator[83987]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:46:56 compute-1 systemd-rc-local-generator[83982]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:46:56 compute-1 ceph-mon[79167]: Deploying daemon rgw.rgw.compute-1.zajetc on compute-1
Oct 10 09:46:56 compute-1 ceph-mon[79167]: osdmap e43: 3 total, 3 up, 3 in
Oct 10 09:46:56 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2866042771' entity='client.rgw.rgw.compute-2.qujzwn' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct 10 09:46:56 compute-1 ceph-mon[79167]: from='client.? ' entity='client.rgw.rgw.compute-2.qujzwn' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct 10 09:46:56 compute-1 ceph-mon[79167]: pgmap v19: 164 pgs: 1 unknown, 163 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:46:56 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e44 e44: 3 total, 3 up, 3 in
Oct 10 09:46:56 compute-1 systemd[1]: Starting Ceph rgw.rgw.compute-1.zajetc for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 09:46:56 compute-1 podman[84044]: 2025-10-10 09:46:56.450426453 +0000 UTC m=+0.051951986 container create f0088935d6b485e22fe086b6885d0211eb99a9c88590188ac4b5da7d1a9ed8c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-rgw-rgw-compute-1-zajetc, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:46:56 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8886dfe0a98776c3795af87f9925497e986d8f9ee515a26c351d6b93a505fed4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:56 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8886dfe0a98776c3795af87f9925497e986d8f9ee515a26c351d6b93a505fed4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:56 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8886dfe0a98776c3795af87f9925497e986d8f9ee515a26c351d6b93a505fed4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:56 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8886dfe0a98776c3795af87f9925497e986d8f9ee515a26c351d6b93a505fed4/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-1.zajetc supports timestamps until 2038 (0x7fffffff)
Oct 10 09:46:56 compute-1 podman[84044]: 2025-10-10 09:46:56.424211818 +0000 UTC m=+0.025737421 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:46:56 compute-1 podman[84044]: 2025-10-10 09:46:56.526917685 +0000 UTC m=+0.128443258 container init f0088935d6b485e22fe086b6885d0211eb99a9c88590188ac4b5da7d1a9ed8c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-rgw-rgw-compute-1-zajetc, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct 10 09:46:56 compute-1 podman[84044]: 2025-10-10 09:46:56.542591902 +0000 UTC m=+0.144117475 container start f0088935d6b485e22fe086b6885d0211eb99a9c88590188ac4b5da7d1a9ed8c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-rgw-rgw-compute-1-zajetc, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:46:56 compute-1 bash[84044]: f0088935d6b485e22fe086b6885d0211eb99a9c88590188ac4b5da7d1a9ed8c3
Oct 10 09:46:56 compute-1 systemd[1]: Started Ceph rgw.rgw.compute-1.zajetc for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 09:46:56 compute-1 radosgw[84063]: deferred set uid:gid to 167:167 (ceph:ceph)
Oct 10 09:46:56 compute-1 radosgw[84063]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process radosgw, pid 2
Oct 10 09:46:56 compute-1 radosgw[84063]: framework: beast
Oct 10 09:46:56 compute-1 radosgw[84063]: framework conf key: endpoint, val: 192.168.122.101:8082
Oct 10 09:46:56 compute-1 radosgw[84063]: init_numa not setting numa affinity
Oct 10 09:46:56 compute-1 sudo[83815]: pam_unix(sudo:session): session closed for user root
Oct 10 09:46:57 compute-1 ceph-mon[79167]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 10 09:46:57 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/4039652738' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 10 09:46:57 compute-1 ceph-mon[79167]: from='client.? ' entity='client.rgw.rgw.compute-2.qujzwn' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct 10 09:46:57 compute-1 ceph-mon[79167]: osdmap e44: 3 total, 3 up, 3 in
Oct 10 09:46:57 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:57 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:57 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:57 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.myiozw", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 10 09:46:57 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.myiozw", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 10 09:46:57 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:57 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:46:57 compute-1 ceph-mon[79167]: Deploying daemon rgw.rgw.compute-0.myiozw on compute-0
Oct 10 09:46:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:46:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e45 e45: 3 total, 3 up, 3 in
Oct 10 09:46:57 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 45 pg[10.0( empty local-lis/les=0/0 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [1] r=0 lpr=45 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:46:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Oct 10 09:46:57 compute-1 ceph-mon[79167]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/877657232' entity='client.rgw.rgw.compute-1.zajetc' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 10 09:46:58 compute-1 ceph-mon[79167]: osdmap e45: 3 total, 3 up, 3 in
Oct 10 09:46:58 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2659714554' entity='client.rgw.rgw.compute-2.qujzwn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 10 09:46:58 compute-1 ceph-mon[79167]: from='client.? ' entity='client.rgw.rgw.compute-2.qujzwn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 10 09:46:58 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/877657232' entity='client.rgw.rgw.compute-1.zajetc' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 10 09:46:58 compute-1 ceph-mon[79167]: from='client.? ' entity='client.rgw.rgw.compute-1.zajetc' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 10 09:46:58 compute-1 ceph-mon[79167]: pgmap v22: 165 pgs: 2 unknown, 163 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:46:58 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/897202661' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Oct 10 09:46:58 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e46 e46: 3 total, 3 up, 3 in
Oct 10 09:46:58 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 46 pg[10.0( empty local-lis/les=45/46 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [1] r=0 lpr=45 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:46:59 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e47 e47: 3 total, 3 up, 3 in
Oct 10 09:46:59 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Oct 10 09:46:59 compute-1 ceph-mon[79167]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/877657232' entity='client.rgw.rgw.compute-1.zajetc' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 10 09:46:59 compute-1 ceph-mon[79167]: from='client.? ' entity='client.rgw.rgw.compute-2.qujzwn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 10 09:46:59 compute-1 ceph-mon[79167]: from='client.? ' entity='client.rgw.rgw.compute-1.zajetc' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 10 09:46:59 compute-1 ceph-mon[79167]: osdmap e46: 3 total, 3 up, 3 in
Oct 10 09:46:59 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:59 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:59 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:59 compute-1 ceph-mon[79167]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct 10 09:46:59 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:59 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:46:59 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.vlgajy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 10 09:46:59 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.vlgajy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 10 09:46:59 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:46:59 compute-1 ceph-mon[79167]: Deploying daemon mds.cephfs.compute-2.vlgajy on compute-2
Oct 10 09:47:00 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e48 e48: 3 total, 3 up, 3 in
Oct 10 09:47:00 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).mds e3 new map
Oct 10 09:47:00 compute-1 ceph-mon[79167]: osdmap e47: 3 total, 3 up, 3 in
Oct 10 09:47:00 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).mds e3 print_map
                                           e3
                                           btime 2025-10-10T09:47:00:211513+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-10T09:46:34.511367+0000
                                           modified        2025-10-10T09:46:34.511367+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-2.vlgajy{-1:24337} state up:standby seq 1 addr [v2:192.168.122.102:6804/1744510053,v1:192.168.122.102:6805/1744510053] compat {c=[1],r=[1],i=[1fff]}]
Oct 10 09:47:00 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2659714554' entity='client.rgw.rgw.compute-2.qujzwn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 10 09:47:00 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/877657232' entity='client.rgw.rgw.compute-1.zajetc' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 10 09:47:00 compute-1 ceph-mon[79167]: from='client.? ' entity='client.rgw.rgw.compute-2.qujzwn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 10 09:47:00 compute-1 ceph-mon[79167]: from='client.? ' entity='client.rgw.rgw.compute-1.zajetc' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 10 09:47:00 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2946038047' entity='client.rgw.rgw.compute-0.myiozw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 10 09:47:00 compute-1 ceph-mon[79167]: pgmap v25: 166 pgs: 3 unknown, 163 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:47:00 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/4100066023' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Oct 10 09:47:00 compute-1 ceph-mon[79167]: from='client.? ' entity='client.rgw.rgw.compute-2.qujzwn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 10 09:47:00 compute-1 ceph-mon[79167]: from='client.? ' entity='client.rgw.rgw.compute-1.zajetc' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 10 09:47:00 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2946038047' entity='client.rgw.rgw.compute-0.myiozw' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 10 09:47:00 compute-1 ceph-mon[79167]: osdmap e48: 3 total, 3 up, 3 in
Oct 10 09:47:00 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).mds e4 new map
Oct 10 09:47:00 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).mds e4 print_map
                                           e4
                                           btime 2025-10-10T09:47:00.244509+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-10T09:46:34.511367+0000
                                           modified        2025-10-10T09:47:00.244232+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24337}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                           [mds.cephfs.compute-2.vlgajy{0:24337} state up:creating seq 1 addr [v2:192.168.122.102:6804/1744510053,v1:192.168.122.102:6805/1744510053] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Oct 10 09:47:01 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e49 e49: 3 total, 3 up, 3 in
Oct 10 09:47:01 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Oct 10 09:47:01 compute-1 ceph-mon[79167]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/877657232' entity='client.rgw.rgw.compute-1.zajetc' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 10 09:47:01 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:01 compute-1 ceph-mon[79167]: mds.? [v2:192.168.122.102:6804/1744510053,v1:192.168.122.102:6805/1744510053] up:boot
Oct 10 09:47:01 compute-1 ceph-mon[79167]: daemon mds.cephfs.compute-2.vlgajy assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct 10 09:47:01 compute-1 ceph-mon[79167]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct 10 09:47:01 compute-1 ceph-mon[79167]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct 10 09:47:01 compute-1 ceph-mon[79167]: fsmap cephfs:0 1 up:standby
Oct 10 09:47:01 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.vlgajy"}]: dispatch
Oct 10 09:47:01 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:01 compute-1 ceph-mon[79167]: fsmap cephfs:1 {0=cephfs.compute-2.vlgajy=up:creating}
Oct 10 09:47:01 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:01 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.cchwlo", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 10 09:47:01 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.cchwlo", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 10 09:47:01 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:47:01 compute-1 ceph-mon[79167]: daemon mds.cephfs.compute-2.vlgajy is now active in filesystem cephfs as rank 0
Oct 10 09:47:01 compute-1 ceph-mon[79167]: Deploying daemon mds.cephfs.compute-0.cchwlo on compute-0
Oct 10 09:47:01 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).mds e5 new map
Oct 10 09:47:01 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).mds e5 print_map
                                           e5
                                           btime 2025-10-10T09:47:01.287113+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-10T09:46:34.511367+0000
                                           modified        2025-10-10T09:47:01.287110+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24337}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 24337 members: 24337
                                           [mds.cephfs.compute-2.vlgajy{0:24337} state up:active seq 2 addr [v2:192.168.122.102:6804/1744510053,v1:192.168.122.102:6805/1744510053] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Oct 10 09:47:01 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 49 pg[12.0( empty local-lis/les=0/0 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [1] r=0 lpr=49 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:47:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e50 e50: 3 total, 3 up, 3 in
Oct 10 09:47:02 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 50 pg[12.0( empty local-lis/les=49/50 n=0 ec=49/49 lis/c=0/0 les/c/f=0/0/0 sis=49) [1] r=0 lpr=49 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:02 compute-1 ceph-mon[79167]: osdmap e49: 3 total, 3 up, 3 in
Oct 10 09:47:02 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2659714554' entity='client.rgw.rgw.compute-2.qujzwn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 10 09:47:02 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/877657232' entity='client.rgw.rgw.compute-1.zajetc' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 10 09:47:02 compute-1 ceph-mon[79167]: from='client.? ' entity='client.rgw.rgw.compute-2.qujzwn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 10 09:47:02 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2946038047' entity='client.rgw.rgw.compute-0.myiozw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 10 09:47:02 compute-1 ceph-mon[79167]: from='client.? ' entity='client.rgw.rgw.compute-1.zajetc' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 10 09:47:02 compute-1 ceph-mon[79167]: mds.? [v2:192.168.122.102:6804/1744510053,v1:192.168.122.102:6805/1744510053] up:active
Oct 10 09:47:02 compute-1 ceph-mon[79167]: fsmap cephfs:1 {0=cephfs.compute-2.vlgajy=up:active}
Oct 10 09:47:02 compute-1 ceph-mon[79167]: pgmap v28: 167 pgs: 1 unknown, 1 creating+peering, 165 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 1.5 KiB/s wr, 8 op/s
Oct 10 09:47:02 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:02 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:02 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:02 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.fhagzt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 10 09:47:02 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.fhagzt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 10 09:47:02 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:47:02 compute-1 ceph-mon[79167]: from='client.? ' entity='client.rgw.rgw.compute-2.qujzwn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 10 09:47:02 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2946038047' entity='client.rgw.rgw.compute-0.myiozw' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 10 09:47:02 compute-1 ceph-mon[79167]: from='client.? ' entity='client.rgw.rgw.compute-1.zajetc' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 10 09:47:02 compute-1 ceph-mon[79167]: osdmap e50: 3 total, 3 up, 3 in
Oct 10 09:47:02 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2946038047' entity='client.rgw.rgw.compute-0.myiozw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 10 09:47:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Oct 10 09:47:02 compute-1 ceph-mon[79167]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/877657232' entity='client.rgw.rgw.compute-1.zajetc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 10 09:47:02 compute-1 sudo[84651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:47:02 compute-1 sudo[84651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:47:02 compute-1 sudo[84651]: pam_unix(sudo:session): session closed for user root
Oct 10 09:47:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).mds e6 new map
Oct 10 09:47:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).mds e6 print_map
                                           e6
                                           btime 2025-10-10T09:47:02.297566+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-10T09:46:34.511367+0000
                                           modified        2025-10-10T09:47:01.287110+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24337}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 24337 members: 24337
                                           [mds.cephfs.compute-2.vlgajy{0:24337} state up:active seq 2 addr [v2:192.168.122.102:6804/1744510053,v1:192.168.122.102:6805/1744510053] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.cchwlo{-1:14592} state up:standby seq 1 addr [v2:192.168.122.100:6806/327772839,v1:192.168.122.100:6807/327772839] compat {c=[1],r=[1],i=[1fff]}]
Oct 10 09:47:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).mds e7 new map
Oct 10 09:47:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).mds e7 print_map
                                           e7
                                           btime 2025-10-10T09:47:02.322797+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-10T09:46:34.511367+0000
                                           modified        2025-10-10T09:47:01.287110+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24337}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24337 members: 24337
                                           [mds.cephfs.compute-2.vlgajy{0:24337} state up:active seq 2 addr [v2:192.168.122.102:6804/1744510053,v1:192.168.122.102:6805/1744510053] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.cchwlo{-1:14592} state up:standby seq 1 addr [v2:192.168.122.100:6806/327772839,v1:192.168.122.100:6807/327772839] compat {c=[1],r=[1],i=[1fff]}]
Oct 10 09:47:02 compute-1 sudo[84676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:47:02 compute-1 sudo[84676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:47:02 compute-1 podman[84742]: 2025-10-10 09:47:02.905008171 +0000 UTC m=+0.062318969 container create 09a76a8172504f33ec7e6535337062c94a54ef7590e0475ffeb8678edac44519 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_liskov, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct 10 09:47:02 compute-1 systemd[1]: Started libpod-conmon-09a76a8172504f33ec7e6535337062c94a54ef7590e0475ffeb8678edac44519.scope.
Oct 10 09:47:02 compute-1 podman[84742]: 2025-10-10 09:47:02.881900111 +0000 UTC m=+0.039210939 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:47:02 compute-1 systemd[1]: Started libcrun container.
Oct 10 09:47:03 compute-1 podman[84742]: 2025-10-10 09:47:03.016104615 +0000 UTC m=+0.173415473 container init 09a76a8172504f33ec7e6535337062c94a54ef7590e0475ffeb8678edac44519 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_liskov, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 09:47:03 compute-1 podman[84742]: 2025-10-10 09:47:03.028524863 +0000 UTC m=+0.185835691 container start 09a76a8172504f33ec7e6535337062c94a54ef7590e0475ffeb8678edac44519 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:47:03 compute-1 podman[84742]: 2025-10-10 09:47:03.032357497 +0000 UTC m=+0.189668325 container attach 09a76a8172504f33ec7e6535337062c94a54ef7590e0475ffeb8678edac44519 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:47:03 compute-1 busy_liskov[84758]: 167 167
Oct 10 09:47:03 compute-1 systemd[1]: libpod-09a76a8172504f33ec7e6535337062c94a54ef7590e0475ffeb8678edac44519.scope: Deactivated successfully.
Oct 10 09:47:03 compute-1 podman[84742]: 2025-10-10 09:47:03.037512988 +0000 UTC m=+0.194823776 container died 09a76a8172504f33ec7e6535337062c94a54ef7590e0475ffeb8678edac44519 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_liskov, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:47:03 compute-1 systemd[1]: var-lib-containers-storage-overlay-26b67ae4d41d8b149e8dcfe5d415a35dff9e2ea8b73b90a48bc57d54e61dc0e3-merged.mount: Deactivated successfully.
Oct 10 09:47:03 compute-1 podman[84742]: 2025-10-10 09:47:03.091550589 +0000 UTC m=+0.248861387 container remove 09a76a8172504f33ec7e6535337062c94a54ef7590e0475ffeb8678edac44519 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Oct 10 09:47:03 compute-1 systemd[1]: libpod-conmon-09a76a8172504f33ec7e6535337062c94a54ef7590e0475ffeb8678edac44519.scope: Deactivated successfully.
Oct 10 09:47:03 compute-1 systemd[1]: Reloading.
Oct 10 09:47:03 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e51 e51: 3 total, 3 up, 3 in
Oct 10 09:47:03 compute-1 ceph-mon[79167]: Deploying daemon mds.cephfs.compute-1.fhagzt on compute-1
Oct 10 09:47:03 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2659714554' entity='client.rgw.rgw.compute-2.qujzwn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 10 09:47:03 compute-1 ceph-mon[79167]: from='client.? ' entity='client.rgw.rgw.compute-2.qujzwn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 10 09:47:03 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/877657232' entity='client.rgw.rgw.compute-1.zajetc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 10 09:47:03 compute-1 ceph-mon[79167]: from='client.? ' entity='client.rgw.rgw.compute-1.zajetc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 10 09:47:03 compute-1 ceph-mon[79167]: mds.? [v2:192.168.122.100:6806/327772839,v1:192.168.122.100:6807/327772839] up:boot
Oct 10 09:47:03 compute-1 ceph-mon[79167]: fsmap cephfs:1 {0=cephfs.compute-2.vlgajy=up:active} 1 up:standby
Oct 10 09:47:03 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.cchwlo"}]: dispatch
Oct 10 09:47:03 compute-1 ceph-mon[79167]: fsmap cephfs:1 {0=cephfs.compute-2.vlgajy=up:active} 1 up:standby
Oct 10 09:47:03 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2946038047' entity='client.rgw.rgw.compute-0.myiozw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 10 09:47:03 compute-1 ceph-mon[79167]: from='client.? ' entity='client.rgw.rgw.compute-2.qujzwn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 10 09:47:03 compute-1 ceph-mon[79167]: from='client.? ' entity='client.rgw.rgw.compute-1.zajetc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 10 09:47:03 compute-1 ceph-mon[79167]: osdmap e51: 3 total, 3 up, 3 in
Oct 10 09:47:03 compute-1 systemd-sysv-generator[84806]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:47:03 compute-1 systemd-rc-local-generator[84801]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:47:03 compute-1 systemd[1]: Reloading.
Oct 10 09:47:03 compute-1 radosgw[84063]: v1 topic migration: starting v1 topic migration..
Oct 10 09:47:03 compute-1 radosgw[84063]: LDAP not started since no server URIs were provided in the configuration.
Oct 10 09:47:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-rgw-rgw-compute-1-zajetc[84059]: 2025-10-10T09:47:03.491+0000 7f55a02c7980 -1 LDAP not started since no server URIs were provided in the configuration.
Oct 10 09:47:03 compute-1 radosgw[84063]: v1 topic migration: finished v1 topic migration
Oct 10 09:47:03 compute-1 radosgw[84063]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Oct 10 09:47:03 compute-1 radosgw[84063]: framework: beast
Oct 10 09:47:03 compute-1 radosgw[84063]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Oct 10 09:47:03 compute-1 radosgw[84063]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Oct 10 09:47:03 compute-1 radosgw[84063]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Oct 10 09:47:03 compute-1 radosgw[84063]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Oct 10 09:47:03 compute-1 radosgw[84063]: starting handler: beast
Oct 10 09:47:03 compute-1 systemd-rc-local-generator[84876]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:47:03 compute-1 systemd-sysv-generator[84879]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:47:03 compute-1 radosgw[84063]: set uid:gid to 167:167 (ceph:ceph)
Oct 10 09:47:03 compute-1 radosgw[84063]: mgrc service_daemon_register rgw.24185 metadata {arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,container_hostname=compute-1,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.101:8082,frontend_type#0=beast,hostname=compute-1,id=rgw.compute-1.zajetc,kernel_description=#1 SMP PREEMPT_DYNAMIC Tue Sep 30 07:37:35 UTC 2025,kernel_version=5.14.0-621.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864356,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=ac475a20-bf0e-4531-bd8b-a44afde7c93f,zone_name=default,zonegroup_id=8929b431-04ce-48e1-bb4a-cedab812d97d,zonegroup_name=default}
Oct 10 09:47:03 compute-1 radosgw[84063]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Oct 10 09:47:03 compute-1 systemd[1]: Starting Ceph mds.cephfs.compute-1.fhagzt for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 09:47:04 compute-1 podman[84936]: 2025-10-10 09:47:04.051561897 +0000 UTC m=+0.043571047 container create 84436acd4b18010df602a41021b0f19bf4ece283c974306ae2cb358b9cb0b6bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mds-cephfs-compute-1-fhagzt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct 10 09:47:04 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/443e4be71a481ecbb0f405ae4ab7c89730e268208b4b0cce1f920a407b62892e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:47:04 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/443e4be71a481ecbb0f405ae4ab7c89730e268208b4b0cce1f920a407b62892e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 09:47:04 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/443e4be71a481ecbb0f405ae4ab7c89730e268208b4b0cce1f920a407b62892e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 09:47:04 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/443e4be71a481ecbb0f405ae4ab7c89730e268208b4b0cce1f920a407b62892e/merged/var/lib/ceph/mds/ceph-cephfs.compute-1.fhagzt supports timestamps until 2038 (0x7fffffff)
Oct 10 09:47:04 compute-1 podman[84936]: 2025-10-10 09:47:04.125531071 +0000 UTC m=+0.117540231 container init 84436acd4b18010df602a41021b0f19bf4ece283c974306ae2cb358b9cb0b6bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mds-cephfs-compute-1-fhagzt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:47:04 compute-1 podman[84936]: 2025-10-10 09:47:04.032170589 +0000 UTC m=+0.024179759 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:47:04 compute-1 podman[84936]: 2025-10-10 09:47:04.131585446 +0000 UTC m=+0.123594596 container start 84436acd4b18010df602a41021b0f19bf4ece283c974306ae2cb358b9cb0b6bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mds-cephfs-compute-1-fhagzt, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Oct 10 09:47:04 compute-1 bash[84936]: 84436acd4b18010df602a41021b0f19bf4ece283c974306ae2cb358b9cb0b6bd
Oct 10 09:47:04 compute-1 systemd[1]: Started Ceph mds.cephfs.compute-1.fhagzt for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 09:47:04 compute-1 ceph-mds[84956]: set uid:gid to 167:167 (ceph:ceph)
Oct 10 09:47:04 compute-1 ceph-mds[84956]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mds, pid 2
Oct 10 09:47:04 compute-1 ceph-mds[84956]: main not setting numa affinity
Oct 10 09:47:04 compute-1 ceph-mds[84956]: pidfile_write: ignore empty --pid-file
Oct 10 09:47:04 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mds-cephfs-compute-1-fhagzt[84952]: starting mds.cephfs.compute-1.fhagzt at 
Oct 10 09:47:04 compute-1 sudo[84676]: pam_unix(sudo:session): session closed for user root
Oct 10 09:47:04 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Updating MDS map to version 7 from mon.2
Oct 10 09:47:04 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).mds e8 new map
Oct 10 09:47:04 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).mds e8 print_map
                                           e8
                                           btime 2025-10-10T09:47:04.615775+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        8
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-10T09:46:34.511367+0000
                                           modified        2025-10-10T09:47:04.295946+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24337}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24337 members: 24337
                                           [mds.cephfs.compute-2.vlgajy{0:24337} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1744510053,v1:192.168.122.102:6805/1744510053] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.cchwlo{-1:14592} state up:standby seq 1 addr [v2:192.168.122.100:6806/327772839,v1:192.168.122.100:6807/327772839] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.fhagzt{-1:24206} state up:standby seq 1 addr [v2:192.168.122.101:6804/1757766640,v1:192.168.122.101:6805/1757766640] compat {c=[1],r=[1],i=[1fff]}]
Oct 10 09:47:04 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Updating MDS map to version 8 from mon.2
Oct 10 09:47:04 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Monitors have assigned me to become a standby
Oct 10 09:47:04 compute-1 ceph-mon[79167]: pgmap v31: 167 pgs: 1 unknown, 1 creating+peering, 165 active+clean; 450 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 1.5 KiB/s wr, 8 op/s
Oct 10 09:47:04 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:04 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:04 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:04 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:04 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:04 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:04 compute-1 ceph-mon[79167]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 10 09:47:04 compute-1 ceph-mon[79167]: Cluster is now healthy
Oct 10 09:47:04 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:04 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.mssvzx", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct 10 09:47:04 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.mssvzx", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct 10 09:47:04 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct 10 09:47:04 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct 10 09:47:04 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:47:04 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct 10 09:47:04 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct 10 09:47:04 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.mssvzx-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 10 09:47:04 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.mssvzx-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 10 09:47:04 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:47:04 compute-1 sudo[84975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:47:04 compute-1 sudo[84975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:47:04 compute-1 sudo[84975]: pam_unix(sudo:session): session closed for user root
Oct 10 09:47:04 compute-1 sudo[85001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:47:04 compute-1 sudo[85001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:47:05 compute-1 podman[85064]: 2025-10-10 09:47:05.287876808 +0000 UTC m=+0.060820666 container create 405598f1e025e95f56de7800a773795f0390f372019c7270008f63168426d196 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_lederberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct 10 09:47:05 compute-1 systemd[1]: Started libpod-conmon-405598f1e025e95f56de7800a773795f0390f372019c7270008f63168426d196.scope.
Oct 10 09:47:05 compute-1 systemd[1]: Started libcrun container.
Oct 10 09:47:05 compute-1 podman[85064]: 2025-10-10 09:47:05.267503993 +0000 UTC m=+0.040447871 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:47:05 compute-1 podman[85064]: 2025-10-10 09:47:05.371729281 +0000 UTC m=+0.144673209 container init 405598f1e025e95f56de7800a773795f0390f372019c7270008f63168426d196 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 10 09:47:05 compute-1 podman[85064]: 2025-10-10 09:47:05.38053018 +0000 UTC m=+0.153474058 container start 405598f1e025e95f56de7800a773795f0390f372019c7270008f63168426d196 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_lederberg, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:47:05 compute-1 podman[85064]: 2025-10-10 09:47:05.38416387 +0000 UTC m=+0.157107748 container attach 405598f1e025e95f56de7800a773795f0390f372019c7270008f63168426d196 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct 10 09:47:05 compute-1 reverent_lederberg[85080]: 167 167
Oct 10 09:47:05 compute-1 systemd[1]: libpod-405598f1e025e95f56de7800a773795f0390f372019c7270008f63168426d196.scope: Deactivated successfully.
Oct 10 09:47:05 compute-1 podman[85064]: 2025-10-10 09:47:05.38601587 +0000 UTC m=+0.158959708 container died 405598f1e025e95f56de7800a773795f0390f372019c7270008f63168426d196 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_lederberg, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:47:05 compute-1 systemd[1]: var-lib-containers-storage-overlay-4c9c13188d60f3db206276a0db2b67e343a32bdf48d020eefa5ed117aad858c2-merged.mount: Deactivated successfully.
Oct 10 09:47:05 compute-1 podman[85064]: 2025-10-10 09:47:05.438070998 +0000 UTC m=+0.211014876 container remove 405598f1e025e95f56de7800a773795f0390f372019c7270008f63168426d196 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_lederberg, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 09:47:05 compute-1 systemd[1]: libpod-conmon-405598f1e025e95f56de7800a773795f0390f372019c7270008f63168426d196.scope: Deactivated successfully.
Oct 10 09:47:05 compute-1 systemd[1]: Reloading.
Oct 10 09:47:05 compute-1 systemd-rc-local-generator[85125]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:47:05 compute-1 systemd-sysv-generator[85132]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:47:05 compute-1 ceph-mon[79167]: Creating key for client.nfs.cephfs.0.0.compute-1.mssvzx
Oct 10 09:47:05 compute-1 ceph-mon[79167]: Ensuring nfs.cephfs.0 is in the ganesha grace table
Oct 10 09:47:05 compute-1 ceph-mon[79167]: Rados config object exists: conf-nfs.cephfs
Oct 10 09:47:05 compute-1 ceph-mon[79167]: Creating key for client.nfs.cephfs.0.0.compute-1.mssvzx-rgw
Oct 10 09:47:05 compute-1 ceph-mon[79167]: Bind address in nfs.cephfs.0.0.compute-1.mssvzx's ganesha conf is defaulting to empty
Oct 10 09:47:05 compute-1 ceph-mon[79167]: Deploying daemon nfs.cephfs.0.0.compute-1.mssvzx on compute-1
Oct 10 09:47:05 compute-1 ceph-mon[79167]: mds.? [v2:192.168.122.101:6804/1757766640,v1:192.168.122.101:6805/1757766640] up:boot
Oct 10 09:47:05 compute-1 ceph-mon[79167]: mds.? [v2:192.168.122.102:6804/1744510053,v1:192.168.122.102:6805/1744510053] up:active
Oct 10 09:47:05 compute-1 ceph-mon[79167]: fsmap cephfs:1 {0=cephfs.compute-2.vlgajy=up:active} 2 up:standby
Oct 10 09:47:05 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.fhagzt"}]: dispatch
Oct 10 09:47:05 compute-1 systemd[1]: Reloading.
Oct 10 09:47:05 compute-1 systemd-rc-local-generator[85167]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:47:05 compute-1 systemd-sysv-generator[85171]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:47:06 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 09:47:06 compute-1 podman[85226]: 2025-10-10 09:47:06.513858098 +0000 UTC m=+0.059953333 container create 2391dd632d14ec9648c3d8d1edd069f6584c3097475f03ae8ea909b98a6066a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:47:06 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32b88a7cec485365e9b39c695c6cd554fe2d4deeb9799c6b37cc487351d505c2/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 10 09:47:06 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32b88a7cec485365e9b39c695c6cd554fe2d4deeb9799c6b37cc487351d505c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:47:06 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32b88a7cec485365e9b39c695c6cd554fe2d4deeb9799c6b37cc487351d505c2/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:47:06 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32b88a7cec485365e9b39c695c6cd554fe2d4deeb9799c6b37cc487351d505c2/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.mssvzx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:47:06 compute-1 podman[85226]: 2025-10-10 09:47:06.493254007 +0000 UTC m=+0.039349252 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:47:06 compute-1 podman[85226]: 2025-10-10 09:47:06.612138984 +0000 UTC m=+0.158234249 container init 2391dd632d14ec9648c3d8d1edd069f6584c3097475f03ae8ea909b98a6066a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1)
Oct 10 09:47:06 compute-1 podman[85226]: 2025-10-10 09:47:06.62189399 +0000 UTC m=+0.167989235 container start 2391dd632d14ec9648c3d8d1edd069f6584c3097475f03ae8ea909b98a6066a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 10 09:47:06 compute-1 bash[85226]: 2391dd632d14ec9648c3d8d1edd069f6584c3097475f03ae8ea909b98a6066a6
Oct 10 09:47:06 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:06 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 10 09:47:06 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 09:47:06 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:06 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 10 09:47:06 compute-1 ceph-mon[79167]: pgmap v32: 167 pgs: 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 241 KiB/s rd, 9.4 KiB/s wr, 445 op/s
Oct 10 09:47:06 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:06 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 10 09:47:06 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:06 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 10 09:47:06 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:06 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 10 09:47:06 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:06 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 10 09:47:06 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).mds e9 new map
Oct 10 09:47:06 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).mds e9 print_map
                                           e9
                                           btime 2025-10-10T09:47:06:672904+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        8
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-10T09:46:34.511367+0000
                                           modified        2025-10-10T09:47:04.295946+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24337}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24337 members: 24337
                                           [mds.cephfs.compute-2.vlgajy{0:24337} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1744510053,v1:192.168.122.102:6805/1744510053] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.cchwlo{-1:14592} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/327772839,v1:192.168.122.100:6807/327772839] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.fhagzt{-1:24206} state up:standby seq 1 addr [v2:192.168.122.101:6804/1757766640,v1:192.168.122.101:6805/1757766640] compat {c=[1],r=[1],i=[1fff]}]
Oct 10 09:47:06 compute-1 sudo[85001]: pam_unix(sudo:session): session closed for user root
Oct 10 09:47:06 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:06 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 10 09:47:06 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:06 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 09:47:06 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:06 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Oct 10 09:47:06 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:06 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Oct 10 09:47:06 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:06 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 09:47:06 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:06 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 09:47:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:47:07 compute-1 ceph-mon[79167]: mds.? [v2:192.168.122.100:6806/327772839,v1:192.168.122.100:6807/327772839] up:standby
Oct 10 09:47:07 compute-1 ceph-mon[79167]: fsmap cephfs:1 {0=cephfs.compute-2.vlgajy=up:active} 2 up:standby
Oct 10 09:47:07 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:07 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:07 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:07 compute-1 ceph-mon[79167]: Creating key for client.nfs.cephfs.1.0.compute-2.boccfy
Oct 10 09:47:07 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.boccfy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct 10 09:47:07 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.boccfy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct 10 09:47:07 compute-1 ceph-mon[79167]: Ensuring nfs.cephfs.1 is in the ganesha grace table
Oct 10 09:47:07 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct 10 09:47:07 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct 10 09:47:07 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:47:08 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).mds e10 new map
Oct 10 09:47:08 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).mds e10 print_map
                                           e10
                                           btime 2025-10-10T09:47:08:789045+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        8
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-10-10T09:46:34.511367+0000
                                           modified        2025-10-10T09:47:04.295946+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24337}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24337 members: 24337
                                           [mds.cephfs.compute-2.vlgajy{0:24337} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1744510053,v1:192.168.122.102:6805/1744510053] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.cchwlo{-1:14592} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/327772839,v1:192.168.122.100:6807/327772839] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.fhagzt{-1:24206} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/1757766640,v1:192.168.122.101:6805/1757766640] compat {c=[1],r=[1],i=[1fff]}]
Oct 10 09:47:08 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Updating MDS map to version 10 from mon.2
Oct 10 09:47:08 compute-1 ceph-mon[79167]: pgmap v33: 167 pgs: 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 204 KiB/s rd, 8.0 KiB/s wr, 376 op/s
Oct 10 09:47:08 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000001:nfs.cephfs.0: -2
Oct 10 09:47:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 09:47:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 10 09:47:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 10 09:47:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 10 09:47:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 10 09:47:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 10 09:47:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 10 09:47:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 09:47:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 09:47:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 09:47:09 compute-1 ceph-mon[79167]: mds.? [v2:192.168.122.101:6804/1757766640,v1:192.168.122.101:6805/1757766640] up:standby
Oct 10 09:47:09 compute-1 ceph-mon[79167]: fsmap cephfs:1 {0=cephfs.compute-2.vlgajy=up:active} 2 up:standby
Oct 10 09:47:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 10 09:47:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 09:47:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 10 09:47:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 10 09:47:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 10 09:47:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 10 09:47:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 10 09:47:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 10 09:47:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 10 09:47:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 10 09:47:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 10 09:47:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 10 09:47:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 10 09:47:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 10 09:47:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 09:47:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 10 09:47:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 09:47:10 compute-1 ceph-mon[79167]: pgmap v34: 167 pgs: 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 160 KiB/s rd, 6.2 KiB/s wr, 295 op/s
Oct 10 09:47:10 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct 10 09:47:10 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct 10 09:47:10 compute-1 ceph-mon[79167]: Rados config object exists: conf-nfs.cephfs
Oct 10 09:47:10 compute-1 ceph-mon[79167]: Creating key for client.nfs.cephfs.1.0.compute-2.boccfy-rgw
Oct 10 09:47:10 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.boccfy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 10 09:47:10 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.boccfy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 10 09:47:10 compute-1 ceph-mon[79167]: Bind address in nfs.cephfs.1.0.compute-2.boccfy's ganesha conf is defaulting to empty
Oct 10 09:47:10 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:47:10 compute-1 ceph-mon[79167]: Deploying daemon nfs.cephfs.1.0.compute-2.boccfy on compute-2
Oct 10 09:47:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:11 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 09:47:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:11 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 09:47:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:11 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 09:47:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:11 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 09:47:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:47:12 compute-1 ceph-mon[79167]: pgmap v35: 167 pgs: 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 162 KiB/s rd, 6.1 KiB/s wr, 299 op/s
Oct 10 09:47:12 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:12 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:12 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:12 compute-1 ceph-mon[79167]: Creating key for client.nfs.cephfs.2.0.compute-0.ruydzo
Oct 10 09:47:12 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ruydzo", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct 10 09:47:12 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ruydzo", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct 10 09:47:12 compute-1 ceph-mon[79167]: Ensuring nfs.cephfs.2 is in the ganesha grace table
Oct 10 09:47:12 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct 10 09:47:12 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct 10 09:47:12 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:47:12 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct 10 09:47:12 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct 10 09:47:12 compute-1 ceph-mon[79167]: Rados config object exists: conf-nfs.cephfs
Oct 10 09:47:12 compute-1 ceph-mon[79167]: Creating key for client.nfs.cephfs.2.0.compute-0.ruydzo-rgw
Oct 10 09:47:12 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ruydzo-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 10 09:47:12 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ruydzo-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 10 09:47:12 compute-1 ceph-mon[79167]: Bind address in nfs.cephfs.2.0.compute-0.ruydzo's ganesha conf is defaulting to empty
Oct 10 09:47:12 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:47:12 compute-1 ceph-mon[79167]: Deploying daemon nfs.cephfs.2.0.compute-0.ruydzo on compute-0
Oct 10 09:47:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:13 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 09:47:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:13 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 09:47:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:13 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 09:47:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:13 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 09:47:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:13 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 09:47:14 compute-1 sudo[85295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:47:14 compute-1 sudo[85295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:47:14 compute-1 sudo[85295]: pam_unix(sudo:session): session closed for user root
Oct 10 09:47:14 compute-1 sudo[85320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:47:14 compute-1 sudo[85320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:47:14 compute-1 ceph-mon[79167]: pgmap v36: 167 pgs: 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 146 KiB/s rd, 5.6 KiB/s wr, 270 op/s
Oct 10 09:47:14 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:14 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:14 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:14 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:14 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:14 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:14 compute-1 ceph-mon[79167]: Deploying daemon haproxy.nfs.cephfs.compute-1.ehhoyw on compute-1
Oct 10 09:47:16 compute-1 ceph-mon[79167]: pgmap v37: 167 pgs: 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 130 KiB/s rd, 6.6 KiB/s wr, 239 op/s
Oct 10 09:47:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:47:17 compute-1 podman[85385]: 2025-10-10 09:47:17.443696862 +0000 UTC m=+2.828421610 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct 10 09:47:17 compute-1 podman[85385]: 2025-10-10 09:47:17.465420113 +0000 UTC m=+2.850144791 container create 2802889e2879f73e699857bfc0f09fd032dd6db8660984e88d8ef1406650dd32 (image=quay.io/ceph/haproxy:2.3, name=trusting_antonelli)
Oct 10 09:47:17 compute-1 systemd[1]: Started libpod-conmon-2802889e2879f73e699857bfc0f09fd032dd6db8660984e88d8ef1406650dd32.scope.
Oct 10 09:47:17 compute-1 systemd[1]: Started libcrun container.
Oct 10 09:47:17 compute-1 podman[85385]: 2025-10-10 09:47:17.551317202 +0000 UTC m=+2.936041960 container init 2802889e2879f73e699857bfc0f09fd032dd6db8660984e88d8ef1406650dd32 (image=quay.io/ceph/haproxy:2.3, name=trusting_antonelli)
Oct 10 09:47:17 compute-1 podman[85385]: 2025-10-10 09:47:17.563390331 +0000 UTC m=+2.948115029 container start 2802889e2879f73e699857bfc0f09fd032dd6db8660984e88d8ef1406650dd32 (image=quay.io/ceph/haproxy:2.3, name=trusting_antonelli)
Oct 10 09:47:17 compute-1 podman[85385]: 2025-10-10 09:47:17.567411091 +0000 UTC m=+2.952135789 container attach 2802889e2879f73e699857bfc0f09fd032dd6db8660984e88d8ef1406650dd32 (image=quay.io/ceph/haproxy:2.3, name=trusting_antonelli)
Oct 10 09:47:17 compute-1 trusting_antonelli[85502]: 0 0
Oct 10 09:47:17 compute-1 systemd[1]: libpod-2802889e2879f73e699857bfc0f09fd032dd6db8660984e88d8ef1406650dd32.scope: Deactivated successfully.
Oct 10 09:47:17 compute-1 conmon[85502]: conmon 2802889e2879f73e6998 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2802889e2879f73e699857bfc0f09fd032dd6db8660984e88d8ef1406650dd32.scope/container/memory.events
Oct 10 09:47:17 compute-1 podman[85507]: 2025-10-10 09:47:17.610549265 +0000 UTC m=+0.028285031 container died 2802889e2879f73e699857bfc0f09fd032dd6db8660984e88d8ef1406650dd32 (image=quay.io/ceph/haproxy:2.3, name=trusting_antonelli)
Oct 10 09:47:17 compute-1 systemd[1]: var-lib-containers-storage-overlay-539fc5640b4738cd9075a771a9d807e4902716255b4160d5c8e9dc269ef3517a-merged.mount: Deactivated successfully.
Oct 10 09:47:17 compute-1 podman[85507]: 2025-10-10 09:47:17.656906647 +0000 UTC m=+0.074642413 container remove 2802889e2879f73e699857bfc0f09fd032dd6db8660984e88d8ef1406650dd32 (image=quay.io/ceph/haproxy:2.3, name=trusting_antonelli)
Oct 10 09:47:17 compute-1 systemd[1]: libpod-conmon-2802889e2879f73e699857bfc0f09fd032dd6db8660984e88d8ef1406650dd32.scope: Deactivated successfully.
Oct 10 09:47:17 compute-1 systemd[1]: Reloading.
Oct 10 09:47:17 compute-1 systemd-rc-local-generator[85554]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:47:17 compute-1 systemd-sysv-generator[85558]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:47:18 compute-1 systemd[1]: Reloading.
Oct 10 09:47:18 compute-1 systemd-sysv-generator[85601]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:47:18 compute-1 systemd-rc-local-generator[85597]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:47:18 compute-1 systemd[1]: Starting Ceph haproxy.nfs.cephfs.compute-1.ehhoyw for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 09:47:18 compute-1 podman[85655]: 2025-10-10 09:47:18.687458806 +0000 UTC m=+0.060999162 container create 7ce1be4884f313700b9f3fe6c66e14b163c4d6bf31089a45b751a6389f8be562 (image=quay.io/ceph/haproxy:2.3, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw)
Oct 10 09:47:18 compute-1 ceph-mon[79167]: pgmap v38: 167 pgs: 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.4 KiB/s wr, 42 op/s
Oct 10 09:47:18 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:18 compute-1 podman[85655]: 2025-10-10 09:47:18.655881506 +0000 UTC m=+0.029421902 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct 10 09:47:18 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9184a1463110283b415ebe1aaffb56f883db14b6210305024f4070f5289d465f/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Oct 10 09:47:18 compute-1 podman[85655]: 2025-10-10 09:47:18.796484544 +0000 UTC m=+0.170024950 container init 7ce1be4884f313700b9f3fe6c66e14b163c4d6bf31089a45b751a6389f8be562 (image=quay.io/ceph/haproxy:2.3, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw)
Oct 10 09:47:18 compute-1 podman[85655]: 2025-10-10 09:47:18.807104093 +0000 UTC m=+0.180644439 container start 7ce1be4884f313700b9f3fe6c66e14b163c4d6bf31089a45b751a6389f8be562 (image=quay.io/ceph/haproxy:2.3, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw)
Oct 10 09:47:18 compute-1 bash[85655]: 7ce1be4884f313700b9f3fe6c66e14b163c4d6bf31089a45b751a6389f8be562
Oct 10 09:47:18 compute-1 systemd[1]: Started Ceph haproxy.nfs.cephfs.compute-1.ehhoyw for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 09:47:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [NOTICE] 282/094718 (2) : New worker #1 (4) forked
Oct 10 09:47:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:18 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a8000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:18 compute-1 sudo[85320]: pam_unix(sudo:session): session closed for user root
Oct 10 09:47:19 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:19 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:19 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:19 compute-1 ceph-mon[79167]: Deploying daemon haproxy.nfs.cephfs.compute-0.gptveb on compute-0
Oct 10 09:47:20 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:20 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0001c40 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:20 compute-1 ceph-mon[79167]: pgmap v39: 167 pgs: 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.4 KiB/s wr, 42 op/s
Oct 10 09:47:22 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:47:22 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:22 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390000fa0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:22 compute-1 ceph-mon[79167]: pgmap v40: 167 pgs: 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.4 KiB/s wr, 42 op/s
Oct 10 09:47:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:23 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:24 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:24 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:24 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:24 compute-1 ceph-mon[79167]: Deploying daemon haproxy.nfs.cephfs.compute-2.eokdol on compute-2
Oct 10 09:47:24 compute-1 ceph-mon[79167]: pgmap v41: 167 pgs: 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 5.1 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Oct 10 09:47:24 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:24 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:25 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:25 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0002740 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:26 compute-1 ceph-mon[79167]: pgmap v42: 167 pgs: 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Oct 10 09:47:26 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:26 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:27 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:47:27 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:27 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388001140 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:27 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:27 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388001140 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:28 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0.
Oct 10 09:47:28 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:28.418467) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 09:47:28 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Oct 10 09:47:28 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089648418681, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 6471, "num_deletes": 255, "total_data_size": 17860428, "memory_usage": 19297984, "flush_reason": "Manual Compaction"}
Oct 10 09:47:28 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Oct 10 09:47:28 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089648485420, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 11386860, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6, "largest_seqno": 6476, "table_properties": {"data_size": 11362441, "index_size": 15281, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8069, "raw_key_size": 78408, "raw_average_key_size": 24, "raw_value_size": 11300778, "raw_average_value_size": 3507, "num_data_blocks": 678, "num_entries": 3222, "num_filter_entries": 3222, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089522, "oldest_key_time": 1760089522, "file_creation_time": 1760089648, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Oct 10 09:47:28 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 67016 microseconds, and 38051 cpu microseconds.
Oct 10 09:47:28 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:28.485501) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 11386860 bytes OK
Oct 10 09:47:28 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:28.485527) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Oct 10 09:47:28 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:28.486933) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Oct 10 09:47:28 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:28.486956) EVENT_LOG_v1 {"time_micros": 1760089648486949, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Oct 10 09:47:28 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:28.486976) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Oct 10 09:47:28 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 17825352, prev total WAL file size 17825352, number of live WAL files 2.
Oct 10 09:47:28 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 09:47:28 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:28.494106) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323532' seq:0, type:0; will stop at (end)
Oct 10 09:47:28 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Oct 10 09:47:28 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(10MB) 8(1648B)]
Oct 10 09:47:28 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089648494250, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 11388508, "oldest_snapshot_seqno": -1}
Oct 10 09:47:28 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:28 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:28 compute-1 ceph-mon[79167]: pgmap v43: 167 pgs: 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:47:28 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:28 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:28 compute-1 ceph-mon[79167]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 10 09:47:28 compute-1 ceph-mon[79167]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 10 09:47:28 compute-1 ceph-mon[79167]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct 10 09:47:28 compute-1 ceph-mon[79167]: Deploying daemon keepalived.nfs.cephfs.compute-2.fcbgvm on compute-2
Oct 10 09:47:28 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 2971 keys, 11383426 bytes, temperature: kUnknown
Oct 10 09:47:28 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089648556587, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 11383426, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11359573, "index_size": 15296, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7493, "raw_key_size": 74983, "raw_average_key_size": 25, "raw_value_size": 11301058, "raw_average_value_size": 3803, "num_data_blocks": 677, "num_entries": 2971, "num_filter_entries": 2971, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089522, "oldest_key_time": 0, "file_creation_time": 1760089648, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Oct 10 09:47:28 compute-1 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 09:47:28 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:28.556927) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 11383426 bytes
Oct 10 09:47:28 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:28.558769) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.3 rd, 182.2 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(10.9, 0.0 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3227, records dropped: 256 output_compression: NoCompression
Oct 10 09:47:28 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:28.558803) EVENT_LOG_v1 {"time_micros": 1760089648558787, "job": 4, "event": "compaction_finished", "compaction_time_micros": 62466, "compaction_time_cpu_micros": 29125, "output_level": 6, "num_output_files": 1, "total_output_size": 11383426, "num_input_records": 3227, "num_output_records": 2971, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 09:47:28 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 09:47:28 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089648562595, "job": 4, "event": "table_file_deletion", "file_number": 14}
Oct 10 09:47:28 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 09:47:28 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089648562680, "job": 4, "event": "table_file_deletion", "file_number": 8}
Oct 10 09:47:28 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:28.493956) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:47:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:28 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0002740 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:29 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:29 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388001140 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:30 compute-1 ceph-mon[79167]: pgmap v44: 167 pgs: 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:47:30 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:30 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388001140 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:31 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0002740 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:31 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:32 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:47:32 compute-1 ceph-mon[79167]: pgmap v45: 167 pgs: 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:47:32 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:32 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:32 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:32 compute-1 sudo[85688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:47:32 compute-1 sudo[85688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:47:32 compute-1 sudo[85688]: pam_unix(sudo:session): session closed for user root
Oct 10 09:47:32 compute-1 sudo[85713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:47:32 compute-1 sudo[85713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:47:32 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:32 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:33 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384001e80 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:33 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0002740 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:33 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e52 e52: 3 total, 3 up, 3 in
Oct 10 09:47:33 compute-1 ceph-mon[79167]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct 10 09:47:33 compute-1 ceph-mon[79167]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 10 09:47:33 compute-1 ceph-mon[79167]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 10 09:47:33 compute-1 ceph-mon[79167]: Deploying daemon keepalived.nfs.cephfs.compute-1.twbftp on compute-1
Oct 10 09:47:33 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 09:47:34 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e53 e53: 3 total, 3 up, 3 in
Oct 10 09:47:34 compute-1 ceph-mon[79167]: pgmap v46: 167 pgs: 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:47:34 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct 10 09:47:34 compute-1 ceph-mon[79167]: osdmap e52: 3 total, 3 up, 3 in
Oct 10 09:47:34 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 09:47:34 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:34 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388001140 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:35 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:35 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390003340 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:35 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:35 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384001e80 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:35 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e54 e54: 3 total, 3 up, 3 in
Oct 10 09:47:35 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Oct 10 09:47:35 compute-1 ceph-mon[79167]: osdmap e53: 3 total, 3 up, 3 in
Oct 10 09:47:35 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 09:47:35 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 09:47:35 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 09:47:35 compute-1 podman[85779]: 2025-10-10 09:47:35.936597344 +0000 UTC m=+2.680742729 container create a1d3848cab1a649d3414e4659d1fec83df917c40c03bd477a88301228615fcc7 (image=quay.io/ceph/keepalived:2.2.4, name=inspiring_jepsen, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, name=keepalived, io.openshift.tags=Ceph keepalived, distribution-scope=public, version=2.2.4)
Oct 10 09:47:35 compute-1 systemd[1]: Started libpod-conmon-a1d3848cab1a649d3414e4659d1fec83df917c40c03bd477a88301228615fcc7.scope.
Oct 10 09:47:35 compute-1 podman[85779]: 2025-10-10 09:47:35.914944584 +0000 UTC m=+2.659090009 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct 10 09:47:36 compute-1 systemd[1]: Started libcrun container.
Oct 10 09:47:36 compute-1 podman[85779]: 2025-10-10 09:47:36.033485771 +0000 UTC m=+2.777631166 container init a1d3848cab1a649d3414e4659d1fec83df917c40c03bd477a88301228615fcc7 (image=quay.io/ceph/keepalived:2.2.4, name=inspiring_jepsen, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, com.redhat.component=keepalived-container, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, distribution-scope=public, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Oct 10 09:47:36 compute-1 podman[85779]: 2025-10-10 09:47:36.04479979 +0000 UTC m=+2.788945205 container start a1d3848cab1a649d3414e4659d1fec83df917c40c03bd477a88301228615fcc7 (image=quay.io/ceph/keepalived:2.2.4, name=inspiring_jepsen, release=1793, version=2.2.4, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, description=keepalived for Ceph)
Oct 10 09:47:36 compute-1 podman[85779]: 2025-10-10 09:47:36.04921363 +0000 UTC m=+2.793359025 container attach a1d3848cab1a649d3414e4659d1fec83df917c40c03bd477a88301228615fcc7 (image=quay.io/ceph/keepalived:2.2.4, name=inspiring_jepsen, io.openshift.tags=Ceph keepalived, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, vendor=Red Hat, Inc., description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph.)
Oct 10 09:47:36 compute-1 inspiring_jepsen[85874]: 0 0
Oct 10 09:47:36 compute-1 systemd[1]: libpod-a1d3848cab1a649d3414e4659d1fec83df917c40c03bd477a88301228615fcc7.scope: Deactivated successfully.
Oct 10 09:47:36 compute-1 podman[85779]: 2025-10-10 09:47:36.054659678 +0000 UTC m=+2.798805093 container died a1d3848cab1a649d3414e4659d1fec83df917c40c03bd477a88301228615fcc7 (image=quay.io/ceph/keepalived:2.2.4, name=inspiring_jepsen, release=1793, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, version=2.2.4, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, architecture=x86_64, name=keepalived, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 10 09:47:36 compute-1 systemd[1]: var-lib-containers-storage-overlay-c3d0f156852c5eac93778730723a2ee92bee8375bd914e5d68babdf4a608dfe3-merged.mount: Deactivated successfully.
Oct 10 09:47:36 compute-1 podman[85779]: 2025-10-10 09:47:36.090187186 +0000 UTC m=+2.834332551 container remove a1d3848cab1a649d3414e4659d1fec83df917c40c03bd477a88301228615fcc7 (image=quay.io/ceph/keepalived:2.2.4, name=inspiring_jepsen, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, architecture=x86_64, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git)
Oct 10 09:47:36 compute-1 systemd[1]: libpod-conmon-a1d3848cab1a649d3414e4659d1fec83df917c40c03bd477a88301228615fcc7.scope: Deactivated successfully.
Oct 10 09:47:36 compute-1 systemd[1]: Reloading.
Oct 10 09:47:36 compute-1 systemd-rc-local-generator[85922]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:47:36 compute-1 systemd-sysv-generator[85926]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:47:36 compute-1 systemd[1]: Reloading.
Oct 10 09:47:36 compute-1 systemd-rc-local-generator[85959]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:47:36 compute-1 systemd-sysv-generator[85965]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:47:36 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e55 e55: 3 total, 3 up, 3 in
Oct 10 09:47:36 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 54 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=54 pruub=11.137809753s) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active pruub 172.869598389s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:36 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=54 pruub=11.137809753s) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown pruub 172.869598389s@ mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.5( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.1( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.4( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.3( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.2( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.6( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.7( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.8( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.9( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.a( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.b( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.c( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.d( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.e( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.f( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.10( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.11( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.12( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.13( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.14( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.15( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.16( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.17( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.18( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.19( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.1a( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.1b( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.1c( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.1d( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.1e( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 55 pg[7.1f( empty local-lis/les=22/23 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:36 compute-1 ceph-mon[79167]: pgmap v49: 167 pgs: 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Oct 10 09:47:36 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct 10 09:47:36 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 09:47:36 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 09:47:36 compute-1 ceph-mon[79167]: osdmap e54: 3 total, 3 up, 3 in
Oct 10 09:47:36 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 09:47:36 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct 10 09:47:36 compute-1 ceph-mon[79167]: osdmap e55: 3 total, 3 up, 3 in
Oct 10 09:47:36 compute-1 systemd[1]: Starting Ceph keepalived.nfs.cephfs.compute-1.twbftp for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 09:47:36 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:36 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0002740 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:37 compute-1 podman[86019]: 2025-10-10 09:47:37.1400784 +0000 UTC m=+0.070055648 container create 8efe4c511a9ca69c4fbe77af128c823fd37bedd1e530d3bfb1fc1044e25a5138 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp, version=2.2.4, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Oct 10 09:47:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:47:37 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/813f4f3e98515d1f49d118a58d4d31316c669b35b8f3d9d42503c7dcdcd53760/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:47:37 compute-1 podman[86019]: 2025-10-10 09:47:37.202141861 +0000 UTC m=+0.132119149 container init 8efe4c511a9ca69c4fbe77af128c823fd37bedd1e530d3bfb1fc1044e25a5138 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, vcs-type=git, distribution-scope=public, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, com.redhat.component=keepalived-container, io.openshift.expose-services=, build-date=2023-02-22T09:23:20)
Oct 10 09:47:37 compute-1 podman[86019]: 2025-10-10 09:47:37.208533175 +0000 UTC m=+0.138510433 container start 8efe4c511a9ca69c4fbe77af128c823fd37bedd1e530d3bfb1fc1044e25a5138 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp, description=keepalived for Ceph, distribution-scope=public, architecture=x86_64, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, vendor=Red Hat, Inc., version=2.2.4, build-date=2023-02-22T09:23:20, release=1793)
Oct 10 09:47:37 compute-1 podman[86019]: 2025-10-10 09:47:37.113356983 +0000 UTC m=+0.043334261 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct 10 09:47:37 compute-1 bash[86019]: 8efe4c511a9ca69c4fbe77af128c823fd37bedd1e530d3bfb1fc1044e25a5138
Oct 10 09:47:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:37 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3880030f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:37 compute-1 systemd[1]: Started Ceph keepalived.nfs.cephfs.compute-1.twbftp for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 09:47:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp[86034]: Fri Oct 10 09:47:37 2025: Starting Keepalived v2.2.4 (08/21,2021)
Oct 10 09:47:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp[86034]: Fri Oct 10 09:47:37 2025: Running on Linux 5.14.0-621.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Tue Sep 30 07:37:35 UTC 2025 (built for Linux 5.14.0)
Oct 10 09:47:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp[86034]: Fri Oct 10 09:47:37 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Oct 10 09:47:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp[86034]: Fri Oct 10 09:47:37 2025: Configuration file /etc/keepalived/keepalived.conf
Oct 10 09:47:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp[86034]: Fri Oct 10 09:47:37 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Oct 10 09:47:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp[86034]: Fri Oct 10 09:47:37 2025: Starting VRRP child process, pid=4
Oct 10 09:47:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp[86034]: Fri Oct 10 09:47:37 2025: Startup complete
Oct 10 09:47:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp[86034]: Fri Oct 10 09:47:37 2025: (VI_0) Entering BACKUP STATE (init)
Oct 10 09:47:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp[86034]: Fri Oct 10 09:47:37 2025: VRRP_Script(check_backend) succeeded
Oct 10 09:47:37 compute-1 sudo[85713]: pam_unix(sudo:session): session closed for user root
Oct 10 09:47:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:37 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390003340 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e56 e56: 3 total, 3 up, 3 in
Oct 10 09:47:37 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[10.0( v 51'1091 (0'0,51'1091] local-lis/les=45/46 n=178 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=8.400810242s) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 51'1090 mlcod 51'1090 active pruub 171.151397705s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:37 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.1c( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:37 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.1f( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:37 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.1d( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:37 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.12( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:37 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.a( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:37 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 09:47:37 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:37 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:37 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:37 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 09:47:37 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 09:47:37 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct 10 09:47:37 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Oct 10 09:47:37 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 09:47:37 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 09:47:37 compute-1 ceph-mon[79167]: osdmap e56: 3 total, 3 up, 3 in
Oct 10 09:47:37 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.13( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:37 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.11( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:37 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.16( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:37 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.10( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:37 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.15( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:37 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.b( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:37 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.14( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:37 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.8( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:37 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.9( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:37 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.17( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:37 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.e( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:37 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.5( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:37 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.0( empty local-lis/les=54/56 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:37 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.7( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:37 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.4( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:37 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.1( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:37 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.3( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:37 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.d( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:37 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.2( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:37 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.6( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:37 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.c( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:37 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.19( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:37 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.1e( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:37 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.18( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:37 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.1b( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:37 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.1a( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:37 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[7.f( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=22/22 les/c/f=23/23/0 sis=54) [1] r=0 lpr=54 pi=[22,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:37 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Oct 10 09:47:37 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 56 pg[10.0( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=5 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=56 pruub=8.400810242s) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 51'1090 mlcod 0'0 unknown pruub 171.151397705s@ mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:37 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096dab248 space 0x55b096d265c0 0x0~1000 clean)
Oct 10 09:47:37 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096ddf388 space 0x55b096d27390 0x0~1000 clean)
Oct 10 09:47:37 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096ddf068 space 0x55b096d26f80 0x0~1000 clean)
Oct 10 09:47:37 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096d9e348 space 0x55b096d272c0 0x0~1000 clean)
Oct 10 09:47:37 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096d9ff68 space 0x55b096de3120 0x0~1000 clean)
Oct 10 09:47:37 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096dc3c48 space 0x55b096d260e0 0x0~1000 clean)
Oct 10 09:47:37 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096d4d248 space 0x55b096d361b0 0x0~1000 clean)
Oct 10 09:47:37 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096d827a8 space 0x55b096d0be20 0x0~1000 clean)
Oct 10 09:47:37 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096d9f248 space 0x55b096d26690 0x0~1000 clean)
Oct 10 09:47:37 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096d837e8 space 0x55b096d276d0 0x0~1000 clean)
Oct 10 09:47:37 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096dc3ba8 space 0x55b096d26aa0 0x0~1000 clean)
Oct 10 09:47:37 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096d9f928 space 0x55b096d36010 0x0~1000 clean)
Oct 10 09:47:37 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096d82ca8 space 0x55b096d27460 0x0~1000 clean)
Oct 10 09:47:37 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096dc3108 space 0x55b096c9b460 0x0~1000 clean)
Oct 10 09:47:37 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096d9eca8 space 0x55b096de2760 0x0~1000 clean)
Oct 10 09:47:37 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096dd7f68 space 0x55b096d26b70 0x0~1000 clean)
Oct 10 09:47:37 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096d4cc08 space 0x55b096d27a10 0x0~1000 clean)
Oct 10 09:47:37 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096d831a8 space 0x55b096d27600 0x0~1000 clean)
Oct 10 09:47:37 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096daae88 space 0x55b0966609d0 0x0~1000 clean)
Oct 10 09:47:37 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096ddf6a8 space 0x55b096d27050 0x0~1000 clean)
Oct 10 09:47:37 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096ddfba8 space 0x55b096d27120 0x0~1000 clean)
Oct 10 09:47:37 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096d9fc48 space 0x55b096de2900 0x0~1000 clean)
Oct 10 09:47:37 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096dabd88 space 0x55b096c9b530 0x0~1000 clean)
Oct 10 09:47:37 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096db0168 space 0x55b096c781b0 0x0~1000 clean)
Oct 10 09:47:37 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096d4c668 space 0x55b096d27870 0x0~1000 clean)
Oct 10 09:47:37 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096d83e28 space 0x55b096d277a0 0x0~1000 clean)
Oct 10 09:47:37 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096ddeb68 space 0x55b096d26eb0 0x0~1000 clean)
Oct 10 09:47:37 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096d9e848 space 0x55b096d26760 0x0~1000 clean)
Oct 10 09:47:37 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096daade8 space 0x55b095aef7a0 0x0~1000 clean)
Oct 10 09:47:37 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1).collection(10.0_head 0x55b096422fc0) operator()   moving buffer(0x55b096d9e3e8 space 0x55b096d360e0 0x0~1000 clean)
Oct 10 09:47:38 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e57 e57: 3 total, 3 up, 3 in
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.1b( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.18( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.12( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.1f( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.7( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.10( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.11( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.1e( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.1d( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.1c( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.1a( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.19( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.6( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.5( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.4( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.3( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.8( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.d( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.b( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.9( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.c( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.e( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.f( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.1( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.2( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.13( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.14( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.15( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.a( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.18( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.1b( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.16( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.17( v 51'1091 lc 0'0 (0'0,51'1091] local-lis/les=45/46 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:38 compute-1 ceph-mon[79167]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 10 09:47:38 compute-1 ceph-mon[79167]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct 10 09:47:38 compute-1 ceph-mon[79167]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 10 09:47:38 compute-1 ceph-mon[79167]: Deploying daemon keepalived.nfs.cephfs.compute-0.mciijj on compute-0
Oct 10 09:47:38 compute-1 ceph-mon[79167]: pgmap v52: 229 pgs: 62 unknown, 167 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Oct 10 09:47:38 compute-1 ceph-mon[79167]: 8.6 deep-scrub starts
Oct 10 09:47:38 compute-1 ceph-mon[79167]: 8.6 deep-scrub ok
Oct 10 09:47:38 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct 10 09:47:38 compute-1 ceph-mon[79167]: 7.1c scrub starts
Oct 10 09:47:38 compute-1 ceph-mon[79167]: 7.1c scrub ok
Oct 10 09:47:38 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct 10 09:47:38 compute-1 ceph-mon[79167]: osdmap e57: 3 total, 3 up, 3 in
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.1f( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.12( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.1e( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.1d( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.10( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.1c( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.19( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.1a( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.6( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.5( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.11( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.3( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.4( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.7( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.8( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.d( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.9( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.b( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.0( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 51'1090 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.c( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.e( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.f( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.1( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.2( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.14( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.15( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.13( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.a( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.16( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 57 pg[10.17( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=45/45 les/c/f=46/46/0 sis=56) [1] r=0 lpr=56 pi=[45,56)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:38 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Oct 10 09:47:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:38 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384002f00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:39 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0002740 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:39 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3880030f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:39 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Oct 10 09:47:39 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Oct 10 09:47:39 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e58 e58: 3 total, 3 up, 3 in
Oct 10 09:47:39 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 58 pg[12.0( empty local-lis/les=49/50 n=0 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=10.429636002s) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active pruub 175.213150024s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:39 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 58 pg[12.0( empty local-lis/les=49/50 n=0 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=58 pruub=10.429636002s) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown pruub 175.213150024s@ mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:39 compute-1 ceph-mon[79167]: 8.10 scrub starts
Oct 10 09:47:39 compute-1 ceph-mon[79167]: 8.10 scrub ok
Oct 10 09:47:39 compute-1 ceph-mon[79167]: 7.1f scrub starts
Oct 10 09:47:39 compute-1 ceph-mon[79167]: 7.1f scrub ok
Oct 10 09:47:39 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 09:47:39 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 10 09:47:40 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Oct 10 09:47:40 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Oct 10 09:47:40 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e59 e59: 3 total, 3 up, 3 in
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.11( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.10( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.13( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.12( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.15( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.4( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.7( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.6( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.9( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.8( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.a( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.f( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.c( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.b( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.e( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.d( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.5( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.2( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.3( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.1f( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.1c( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.1a( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.1b( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.18( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.19( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.16( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.14( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.1( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.1e( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.1d( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.17( empty local-lis/les=49/50 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.11( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.13( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.10( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.12( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.4( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.15( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.6( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.8( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.9( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.a( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.7( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.f( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.b( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.c( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.d( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.2( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.3( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.5( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.e( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.0( empty local-lis/les=58/59 n=0 ec=49/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.1c( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.1f( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.1a( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.19( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.18( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.14( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.16( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.1( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.1d( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.1b( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.1e( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 59 pg[12.17( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=49/49 les/c/f=50/50/0 sis=58) [1] r=0 lpr=58 pi=[49,58)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:40 compute-1 ceph-mon[79167]: pgmap v55: 291 pgs: 62 unknown, 229 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:47:40 compute-1 ceph-mon[79167]: 8.11 scrub starts
Oct 10 09:47:40 compute-1 ceph-mon[79167]: 8.11 scrub ok
Oct 10 09:47:40 compute-1 ceph-mon[79167]: 7.1d scrub starts
Oct 10 09:47:40 compute-1 ceph-mon[79167]: 7.1d scrub ok
Oct 10 09:47:40 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 09:47:40 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct 10 09:47:40 compute-1 ceph-mon[79167]: osdmap e58: 3 total, 3 up, 3 in
Oct 10 09:47:40 compute-1 ceph-mon[79167]: osdmap e59: 3 total, 3 up, 3 in
Oct 10 09:47:40 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:40 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3880030f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:40 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp[86034]: Fri Oct 10 09:47:40 2025: (VI_0) Entering MASTER STATE
Oct 10 09:47:40 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp[86034]: Fri Oct 10 09:47:40 2025: (VI_0) Master received advert from 192.168.122.102 with same priority 90 but higher IP address than ours
Oct 10 09:47:40 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp[86034]: Fri Oct 10 09:47:40 2025: (VI_0) Entering BACKUP STATE
Oct 10 09:47:41 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:41 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384002f00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:41 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:41 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0002740 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:41 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.a scrub starts
Oct 10 09:47:41 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.a scrub ok
Oct 10 09:47:41 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e60 e60: 3 total, 3 up, 3 in
Oct 10 09:47:41 compute-1 ceph-mon[79167]: 8.14 scrub starts
Oct 10 09:47:41 compute-1 ceph-mon[79167]: 8.14 scrub ok
Oct 10 09:47:41 compute-1 ceph-mon[79167]: 7.12 scrub starts
Oct 10 09:47:41 compute-1 ceph-mon[79167]: 7.12 scrub ok
Oct 10 09:47:41 compute-1 ceph-mon[79167]: osdmap e60: 3 total, 3 up, 3 in
Oct 10 09:47:42 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:47:42 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Oct 10 09:47:42 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Oct 10 09:47:42 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:42 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0002740 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:43 compute-1 ceph-mon[79167]: pgmap v58: 353 pgs: 62 unknown, 291 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:47:43 compute-1 ceph-mon[79167]: 8.15 scrub starts
Oct 10 09:47:43 compute-1 ceph-mon[79167]: 8.15 scrub ok
Oct 10 09:47:43 compute-1 ceph-mon[79167]: 7.a scrub starts
Oct 10 09:47:43 compute-1 ceph-mon[79167]: 7.a scrub ok
Oct 10 09:47:43 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:43 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:43 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:43 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:43 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:43 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:43 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Oct 10 09:47:43 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Oct 10 09:47:44 compute-1 ceph-mon[79167]: Deploying daemon alertmanager.compute-0 on compute-0
Oct 10 09:47:44 compute-1 ceph-mon[79167]: 8.3 scrub starts
Oct 10 09:47:44 compute-1 ceph-mon[79167]: 8.3 scrub ok
Oct 10 09:47:44 compute-1 ceph-mon[79167]: 7.13 scrub starts
Oct 10 09:47:44 compute-1 ceph-mon[79167]: 7.13 scrub ok
Oct 10 09:47:44 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:44 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.16 deep-scrub starts
Oct 10 09:47:44 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.16 deep-scrub ok
Oct 10 09:47:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:44 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388003e00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:45 compute-1 ceph-mon[79167]: pgmap v60: 353 pgs: 62 unknown, 291 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:47:45 compute-1 ceph-mon[79167]: 8.17 scrub starts
Oct 10 09:47:45 compute-1 ceph-mon[79167]: 8.17 scrub ok
Oct 10 09:47:45 compute-1 ceph-mon[79167]: 7.11 scrub starts
Oct 10 09:47:45 compute-1 ceph-mon[79167]: 7.11 scrub ok
Oct 10 09:47:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:45 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0002740 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:45 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390004050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:45 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Oct 10 09:47:45 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Oct 10 09:47:46 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e61 e61: 3 total, 3 up, 3 in
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[11.1a( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[9.11( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[8.10( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[8.1b( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[11.7( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[8.4( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[11.4( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[9.6( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[11.5( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[9.e( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[9.a( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[8.8( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[9.f( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[11.f( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[9.d( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[11.1( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[11.12( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[9.10( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[11.14( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[8.17( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[9.15( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[8.14( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[8.19( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[11.1b( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.11( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.713502884s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 181.796554565s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.17( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.690283775s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 179.773452759s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.11( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.713395119s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.796554565s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.17( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.690241814s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.773452759s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.10( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.715754509s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 181.799255371s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.10( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.715714455s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.799255371s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.1b( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.682233810s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active pruub 186.765914917s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.1b( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.682155609s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 186.765914917s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.13( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.715379715s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 181.799240112s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.13( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.715338707s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.799240112s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.18( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.681708336s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active pruub 186.765869141s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.18( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.681674957s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 186.765869141s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[8.18( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[11.1c( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.12( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.714180946s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 181.799240112s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.12( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.714117050s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.799240112s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.15( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.687976837s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 179.773345947s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[11.1d( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.15( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.687910080s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.773345947s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[11.1e( empty local-lis/les=0/0 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[9.12( empty local-lis/les=0/0 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-1 ceph-mon[79167]: 8.8 scrub starts
Oct 10 09:47:46 compute-1 ceph-mon[79167]: 8.8 scrub ok
Oct 10 09:47:46 compute-1 ceph-mon[79167]: 7.16 deep-scrub starts
Oct 10 09:47:46 compute-1 ceph-mon[79167]: 7.16 deep-scrub ok
Oct 10 09:47:46 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 09:47:46 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 09:47:46 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 09:47:46 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[8.12( empty local-lis/les=0/0 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:46 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 10 09:47:46 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 09:47:46 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:46 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.1e( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.678252220s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active pruub 186.765853882s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.1e( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.678228378s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 186.765853882s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.13( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.685546875s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 179.773361206s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.13( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.685509682s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.773361206s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:46 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.f( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.678056717s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active pruub 186.765991211s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.f( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.678024292s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 186.765991211s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.1( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.679808617s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 179.768020630s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.1( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.679785728s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.768020630s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.7( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.711056709s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 181.799453735s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.7( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.711032867s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.799453735s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.6( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.710590363s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 181.799346924s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.6( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.710541725s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.799346924s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.4( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.710323334s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 181.799301147s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.9( v 60'1 (0'0,60'1] local-lis/les=58/59 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.710341454s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=60'1 lcod 0'0 mlcod 0'0 active pruub 181.799392700s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.4( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.710278511s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.799301147s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.2( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.676536560s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active pruub 186.765686035s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.9( v 60'1 (0'0,60'1] local-lis/les=58/59 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.710184097s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=60'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.799392700s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.2( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.676502228s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 186.765686035s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.3( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.675749779s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active pruub 186.765289307s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.f( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.678380966s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 179.767974854s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.3( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.675717354s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 186.765289307s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.8( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.709904671s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 181.799392700s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.f( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.678343773s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.767974854s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.a( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.709317207s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 181.799453735s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.8( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.709264755s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.799392700s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.a( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.709279060s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.799453735s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.c( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.709352493s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 181.799697876s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.c( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.709320068s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.799697876s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.b( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.709117889s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 181.799682617s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.b( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.709087372s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.799682617s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.d( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.676994324s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 179.767807007s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.d( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.676941872s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.767807007s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.e( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.708835602s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 181.799758911s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.e( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.708807945s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.799758911s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.9( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.676640511s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 179.767822266s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.5( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.673544884s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active pruub 186.764770508s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.9( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.676595688s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.767822266s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.5( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.673506737s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 186.764770508s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.b( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.676552773s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 179.767883301s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.6( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.674350739s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active pruub 186.765686035s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.b( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.676523209s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.767883301s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.6( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.674324036s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 186.765686035s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.e( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.673120499s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active pruub 186.764724731s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.3( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.675526619s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 179.767166138s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.e( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.673097610s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 186.764724731s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.3( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.675501823s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.767166138s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.2( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.707875252s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 181.799819946s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.2( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.707849503s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.799819946s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.9( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.672030449s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active pruub 186.764068604s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.8( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.671961784s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active pruub 186.764053345s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.9( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.672005653s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 186.764068604s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.5( v 59'1094 (0'0,59'1094] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.674842834s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=57'1092 lcod 59'1093 mlcod 59'1093 active pruub 179.767105103s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.8( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.671931267s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 186.764053345s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.5( v 59'1094 (0'0,59'1094] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.674800873s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=57'1092 lcod 59'1093 mlcod 0'0 unknown NOTIFY pruub 179.767105103s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.3( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.707437515s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 181.799835205s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.3( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.707401276s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.799835205s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.b( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.671418190s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active pruub 186.763946533s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.b( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.671397209s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 186.763946533s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.14( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.671131134s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active pruub 186.763946533s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.14( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.671073914s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 186.763946533s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.4( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.672192574s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active pruub 186.765106201s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.4( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.672147751s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 186.765106201s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.19( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.673967361s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 179.767059326s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.1c( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.706785202s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 181.799942017s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.19( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.673920631s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.767059326s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.1c( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.706767082s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.799942017s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.1a( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.706682205s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 181.799987793s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.1a( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.706666946s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.799987793s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.11( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.669240952s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active pruub 186.762603760s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.11( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.669212341s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 186.762603760s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.10( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.669639587s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active pruub 186.763214111s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.10( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.669616699s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 186.763214111s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.1d( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.673401833s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 179.767013550s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.1d( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.673379898s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.767013550s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.13( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.668558121s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active pruub 186.762359619s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.13( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.668535233s) [0] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 186.762359619s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.18( v 60'1 (0'0,60'1] local-lis/les=58/59 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.706106186s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=60'1 lcod 0'0 mlcod 0'0 active pruub 181.800003052s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.18( v 60'1 (0'0,60'1] local-lis/les=58/59 n=1 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.706048965s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=60'1 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.800003052s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.19( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.706009865s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 181.800003052s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.19( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.705970764s) [0] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.800003052s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.1f( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.672766685s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 179.766891479s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.1f( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.672748566s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.766891479s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.1d( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.660766602s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active pruub 186.755142212s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.1d( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.660737991s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 186.755142212s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.1f( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.660465240s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active pruub 186.755126953s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.a( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.660358429s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active pruub 186.755111694s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.7( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.672463417s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 179.767181396s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.a( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.660330772s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 186.755111694s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.7( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.672400475s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.767181396s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.1e( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.705061913s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 181.800109863s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.1e( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.705037117s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.800109863s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.1d( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.704943657s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 181.800125122s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.1d( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.704910278s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.800125122s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.1b( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.667339325s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 179.762603760s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.1b( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.667314529s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.762603760s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.17( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.704752922s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 active pruub 181.800155640s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[12.17( empty local-lis/les=58/59 n=0 ec=58/49 lis/c=58/58 les/c/f=59/59/0 sis=61 pruub=10.704732895s) [2] r=-1 lpr=61 pi=[58,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 181.800155640s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.11( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.671610832s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 179.767150879s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[10.11( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=61 pruub=8.671586990s) [2] r=-1 lpr=61 pi=[56,61)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.767150879s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.1f( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.660412788s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 186.755126953s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.16( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.667587280s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 active pruub 186.762969971s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:46 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 61 pg[7.16( empty local-lis/les=54/56 n=0 ec=54/22 lis/c=54/54 les/c/f=56/56/0 sis=61 pruub=15.666921616s) [2] r=-1 lpr=61 pi=[54,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 186.762969971s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:46 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Oct 10 09:47:46 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Oct 10 09:47:46 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:46 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e62 e62: 3 total, 3 up, 3 in
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.17( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.17( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.15( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.15( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.13( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.13( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.1( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.1( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.f( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.f( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.9( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.9( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.d( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.d( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.b( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.b( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.3( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.3( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.5( v 59'1094 (0'0,59'1094] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=57'1092 lcod 59'1093 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.19( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.5( v 59'1094 (0'0,59'1094] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=57'1092 lcod 59'1093 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.19( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.1d( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.1d( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.1f( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.1f( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.7( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.7( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[8.14( v 51'44 (0'0,51'44] local-lis/les=61/62 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.1b( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.11( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[9.15( v 44'6 (0'0,44'6] local-lis/les=61/62 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.1b( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[10.11( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[9.10( v 44'6 (0'0,44'6] local-lis/les=61/62 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[8.17( v 51'44 (0'0,51'44] local-lis/les=61/62 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[11.12( v 48'48 (0'0,48'48] local-lis/les=61/62 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[11.1( v 48'48 (0'0,48'48] local-lis/les=61/62 n=1 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[9.d( v 44'6 (0'0,44'6] local-lis/les=61/62 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[11.14( v 60'57 lc 60'56 (0'0,60'57] local-lis/les=61/62 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=60'57 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[11.f( v 48'48 (0'0,48'48] local-lis/les=61/62 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[9.f( v 44'6 lc 0'0 (0'0,44'6] local-lis/les=61/62 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=44'6 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[8.8( v 51'44 (0'0,51'44] local-lis/les=61/62 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[9.a( v 44'6 lc 0'0 (0'0,44'6] local-lis/les=61/62 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=44'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[11.5( v 48'48 (0'0,48'48] local-lis/les=61/62 n=1 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[9.6( v 44'6 lc 0'0 (0'0,44'6] local-lis/les=61/62 n=1 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=44'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[9.e( v 44'6 (0'0,44'6] local-lis/les=61/62 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[11.4( v 48'48 (0'0,48'48] local-lis/les=61/62 n=1 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[11.7( v 48'48 (0'0,48'48] local-lis/les=61/62 n=1 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[8.4( v 51'44 (0'0,51'44] local-lis/les=61/62 n=1 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[8.1b( v 51'44 lc 51'8 (0'0,51'44] local-lis/les=61/62 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[8.18( v 51'44 lc 51'18 (0'0,51'44] local-lis/les=61/62 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[11.1b( v 48'48 (0'0,48'48] local-lis/les=61/62 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[11.1d( v 48'48 (0'0,48'48] local-lis/les=61/62 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[11.1c( v 48'48 (0'0,48'48] local-lis/les=61/62 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[8.12( v 51'44 (0'0,51'44] local-lis/les=61/62 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[8.10( v 57'47 lc 51'14 (0'0,57'47] local-lis/les=61/62 n=1 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=57'47 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[9.11( v 44'6 (0'0,44'6] local-lis/les=61/62 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[8.19( v 51'44 (0'0,51'44] local-lis/les=61/62 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=51'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[9.12( v 44'6 (0'0,44'6] local-lis/les=61/62 n=0 ec=56/43 lis/c=56/56 les/c/f=57/57/0 sis=61) [1] r=0 lpr=61 pi=[56,61)/1 crt=44'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[11.1a( v 48'48 (0'0,48'48] local-lis/les=61/62 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 62 pg[11.1e( v 48'48 (0'0,48'48] local-lis/les=61/62 n=0 ec=58/47 lis/c=58/58 les/c/f=60/60/0 sis=61) [1] r=0 lpr=61 pi=[58,61)/1 crt=48'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:47 compute-1 ceph-mon[79167]: 8.f deep-scrub starts
Oct 10 09:47:47 compute-1 ceph-mon[79167]: pgmap v61: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 2 op/s
Oct 10 09:47:47 compute-1 ceph-mon[79167]: 8.f deep-scrub ok
Oct 10 09:47:47 compute-1 ceph-mon[79167]: 7.15 scrub starts
Oct 10 09:47:47 compute-1 ceph-mon[79167]: 7.15 scrub ok
Oct 10 09:47:47 compute-1 ceph-mon[79167]: Regenerating cephadm self-signed grafana TLS certificates
Oct 10 09:47:47 compute-1 ceph-mon[79167]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Oct 10 09:47:47 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:47 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 09:47:47 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 09:47:47 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 09:47:47 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 09:47:47 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 10 09:47:47 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 09:47:47 compute-1 ceph-mon[79167]: osdmap e61: 3 total, 3 up, 3 in
Oct 10 09:47:47 compute-1 ceph-mon[79167]: Deploying daemon grafana.compute-0 on compute-0
Oct 10 09:47:47 compute-1 ceph-mon[79167]: 7.1a scrub starts
Oct 10 09:47:47 compute-1 ceph-mon[79167]: 7.1a scrub ok
Oct 10 09:47:47 compute-1 ceph-mon[79167]: osdmap e62: 3 total, 3 up, 3 in
Oct 10 09:47:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:47:47 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:47 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388003e00 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:47 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:47 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0002740 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:48 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e63 e63: 3 total, 3 up, 3 in
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.16( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=14.684146881s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 187.773544312s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.16( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=14.684079170s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.773544312s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.2( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=14.678178787s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 187.768295288s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.2( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=14.678135872s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.768295288s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.e( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=14.677606583s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 187.768310547s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.e( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=14.677490234s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.768310547s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.a( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=14.682567596s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 187.773513794s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.a( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=14.682549477s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.773513794s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.6( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=14.674398422s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 187.767364502s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.6( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=14.674337387s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.767364502s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.1a( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=14.673968315s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 187.767318726s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.1a( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=14.673941612s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.767318726s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.1e( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=14.673541069s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 187.767242432s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.1e( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=14.673457146s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.767242432s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.12( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=14.672980309s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 187.767227173s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.12( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=63 pruub=14.672939301s) [0] r=-1 lpr=63 pi=[56,63)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 187.767227173s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.17( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.1( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.13( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.b( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.3( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.15( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.f( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.d( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.5( v 59'1094 (0'0,59'1094] local-lis/les=62/63 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[56,62)/1 crt=59'1094 lcod 59'1093 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.9( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.19( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.1f( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.1b( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.7( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.11( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 63 pg[10.1d( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[56,62)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:48 compute-1 ceph-mon[79167]: 9.14 scrub starts
Oct 10 09:47:48 compute-1 ceph-mon[79167]: 9.14 scrub ok
Oct 10 09:47:48 compute-1 ceph-mon[79167]: pgmap v64: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 2 op/s
Oct 10 09:47:48 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 10 09:47:48 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 10 09:47:48 compute-1 ceph-mon[79167]: osdmap e63: 3 total, 3 up, 3 in
Oct 10 09:47:48 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e64 e64: 3 total, 3 up, 3 in
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 64 pg[10.1e( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] r=0 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 64 pg[10.1a( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] r=0 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 64 pg[10.12( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] r=0 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 64 pg[10.1a( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] r=0 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 64 pg[10.1e( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] r=0 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 64 pg[10.12( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] r=0 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 64 pg[10.e( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] r=0 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 64 pg[10.e( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] r=0 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 64 pg[10.6( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] r=0 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 64 pg[10.2( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] r=0 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 64 pg[10.6( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] r=0 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 64 pg[10.2( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] r=0 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 64 pg[10.16( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] r=0 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 64 pg[10.16( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] r=0 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 64 pg[10.17( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=5 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=64 pruub=15.702158928s) [2] async=[2] r=-1 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 189.096115112s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 64 pg[10.17( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=5 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=64 pruub=15.702087402s) [2] r=-1 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.096115112s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 64 pg[10.a( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] r=0 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 64 pg[10.a( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] r=0 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 64 pg[10.1( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=6 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=64 pruub=15.708526611s) [2] async=[2] r=-1 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 189.103134155s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:48 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 64 pg[10.1( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=6 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=64 pruub=15.708438873s) [2] r=-1 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.103134155s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:48 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:49 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:49 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388003e00 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:49 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e65 e65: 3 total, 3 up, 3 in
Oct 10 09:47:49 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.15( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=5 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.466288567s) [2] async=[2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 189.103683472s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:49 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.15( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=5 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.466186523s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.103683472s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:49 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.7( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=6 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.470971107s) [2] async=[2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 189.109085083s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:49 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.19( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=5 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.470538139s) [2] async=[2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 189.108795166s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:49 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.19( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=5 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.470425606s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.108795166s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:49 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.7( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=6 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.470301628s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.109085083s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:49 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.1b( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=5 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.469999313s) [2] async=[2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 189.109008789s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:49 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.1b( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=5 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.469942093s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.109008789s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:49 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.f( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=6 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.464769363s) [2] async=[2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 189.103912354s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:49 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.13( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=5 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.463806152s) [2] async=[2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 189.103195190s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:49 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.9( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=6 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.464865685s) [2] async=[2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 189.104278564s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:49 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.f( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=6 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.464645386s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.103912354s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:49 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.13( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=5 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.463734627s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.103195190s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:49 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.9( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=6 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.464756966s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.104278564s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:49 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.1f( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=5 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.469159126s) [2] async=[2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 189.108825684s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:49 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.1f( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=5 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.468850136s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.108825684s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:49 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.b( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=6 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.463052750s) [2] async=[2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 189.103561401s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:49 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.b( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=6 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.463002205s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.103561401s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:49 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.5( v 64'1098 (0'0,64'1098] local-lis/les=62/63 n=6 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.463479042s) [2] async=[2] r=-1 lpr=65 pi=[56,65)/1 crt=59'1094 lcod 64'1097 mlcod 64'1097 active pruub 189.104110718s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:49 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.d( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=6 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.463605881s) [2] async=[2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 189.104019165s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:49 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.5( v 64'1098 (0'0,64'1098] local-lis/les=62/63 n=6 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.463342667s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=59'1094 lcod 64'1097 mlcod 0'0 unknown NOTIFY pruub 189.104110718s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:49 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.d( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=6 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.463195801s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.104019165s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:49 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.3( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=6 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.462391853s) [2] async=[2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 189.103607178s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:49 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.3( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=6 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.462322235s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.103607178s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:49 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.1d( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=5 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.466937065s) [2] async=[2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 189.109222412s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:49 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.1d( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=5 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.466878891s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.109222412s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:49 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.11( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=6 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.466481209s) [2] async=[2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 189.109191895s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:49 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.11( v 51'1091 (0'0,51'1091] local-lis/les=62/63 n=6 ec=56/45 lis/c=62/56 les/c/f=63/57/0 sis=65 pruub=14.466403961s) [2] r=-1 lpr=65 pi=[56,65)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.109191895s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:49 compute-1 ceph-mon[79167]: 11.15 scrub starts
Oct 10 09:47:49 compute-1 ceph-mon[79167]: 12.13 deep-scrub starts
Oct 10 09:47:49 compute-1 ceph-mon[79167]: 11.15 scrub ok
Oct 10 09:47:49 compute-1 ceph-mon[79167]: 12.13 deep-scrub ok
Oct 10 09:47:49 compute-1 ceph-mon[79167]: osdmap e64: 3 total, 3 up, 3 in
Oct 10 09:47:49 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:49 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.2( v 51'1091 (0'0,51'1091] local-lis/les=64/65 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:50 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.1a( v 51'1091 (0'0,51'1091] local-lis/les=64/65 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:50 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.e( v 51'1091 (0'0,51'1091] local-lis/les=64/65 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:50 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.6( v 51'1091 (0'0,51'1091] local-lis/les=64/65 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:50 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.12( v 51'1091 (0'0,51'1091] local-lis/les=64/65 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:50 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.16( v 51'1091 (0'0,51'1091] local-lis/les=64/65 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:50 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.a( v 51'1091 (0'0,51'1091] local-lis/les=64/65 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:50 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 65 pg[10.1e( v 51'1091 (0'0,51'1091] local-lis/les=64/65 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=64) [0]/[1] async=[0] r=0 lpr=64 pi=[56,64)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:50 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e66 e66: 3 total, 3 up, 3 in
Oct 10 09:47:50 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 66 pg[10.16( v 51'1091 (0'0,51'1091] local-lis/les=64/65 n=5 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66 pruub=15.285489082s) [0] async=[0] r=-1 lpr=66 pi=[56,66)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 190.961334229s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:50 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 66 pg[10.16( v 51'1091 (0'0,51'1091] local-lis/les=64/65 n=5 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66 pruub=15.285408020s) [0] r=-1 lpr=66 pi=[56,66)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.961334229s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:50 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 66 pg[10.2( v 51'1091 (0'0,51'1091] local-lis/les=64/65 n=6 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66 pruub=15.281500816s) [0] async=[0] r=-1 lpr=66 pi=[56,66)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 190.957702637s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:50 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 66 pg[10.2( v 51'1091 (0'0,51'1091] local-lis/les=64/65 n=6 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66 pruub=15.281429291s) [0] r=-1 lpr=66 pi=[56,66)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.957702637s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:50 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 66 pg[10.e( v 51'1091 (0'0,51'1091] local-lis/les=64/65 n=6 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66 pruub=15.284450531s) [0] async=[0] r=-1 lpr=66 pi=[56,66)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 190.960998535s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:50 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 66 pg[10.e( v 51'1091 (0'0,51'1091] local-lis/les=64/65 n=6 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66 pruub=15.284386635s) [0] r=-1 lpr=66 pi=[56,66)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.960998535s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:50 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 66 pg[10.a( v 51'1091 (0'0,51'1091] local-lis/les=64/65 n=6 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66 pruub=15.284519196s) [0] async=[0] r=-1 lpr=66 pi=[56,66)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 190.961502075s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:50 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 66 pg[10.a( v 51'1091 (0'0,51'1091] local-lis/les=64/65 n=6 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66 pruub=15.284316063s) [0] r=-1 lpr=66 pi=[56,66)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.961502075s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:50 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 66 pg[10.6( v 51'1091 (0'0,51'1091] local-lis/les=64/65 n=6 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66 pruub=15.283745766s) [0] async=[0] r=-1 lpr=66 pi=[56,66)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 190.961090088s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:50 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 66 pg[10.6( v 51'1091 (0'0,51'1091] local-lis/les=64/65 n=6 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66 pruub=15.283687592s) [0] r=-1 lpr=66 pi=[56,66)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.961090088s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:50 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 66 pg[10.1a( v 51'1091 (0'0,51'1091] local-lis/les=64/65 n=5 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66 pruub=15.283139229s) [0] async=[0] r=-1 lpr=66 pi=[56,66)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 190.960800171s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:50 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 66 pg[10.1e( v 51'1091 (0'0,51'1091] local-lis/les=64/65 n=5 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66 pruub=15.283920288s) [0] async=[0] r=-1 lpr=66 pi=[56,66)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 190.961593628s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:50 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 66 pg[10.1e( v 51'1091 (0'0,51'1091] local-lis/les=64/65 n=5 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66 pruub=15.283873558s) [0] r=-1 lpr=66 pi=[56,66)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.961593628s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:50 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 66 pg[10.1a( v 51'1091 (0'0,51'1091] local-lis/les=64/65 n=5 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66 pruub=15.283022881s) [0] r=-1 lpr=66 pi=[56,66)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.960800171s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:50 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 66 pg[10.12( v 51'1091 (0'0,51'1091] local-lis/les=64/65 n=6 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66 pruub=15.283117294s) [0] async=[0] r=-1 lpr=66 pi=[56,66)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 190.961242676s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:50 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 66 pg[10.12( v 51'1091 (0'0,51'1091] local-lis/les=64/65 n=6 ec=56/45 lis/c=64/56 les/c/f=65/57/0 sis=66 pruub=15.283035278s) [0] r=-1 lpr=66 pi=[56,66)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.961242676s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:50 compute-1 ceph-mon[79167]: 9.2 scrub starts
Oct 10 09:47:50 compute-1 ceph-mon[79167]: 9.2 scrub ok
Oct 10 09:47:50 compute-1 ceph-mon[79167]: pgmap v67: 353 pgs: 2 peering, 351 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 102 B/s, 4 objects/s recovering
Oct 10 09:47:50 compute-1 ceph-mon[79167]: 11.0 scrub starts
Oct 10 09:47:50 compute-1 ceph-mon[79167]: 11.0 scrub ok
Oct 10 09:47:50 compute-1 ceph-mon[79167]: osdmap e65: 3 total, 3 up, 3 in
Oct 10 09:47:50 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:50 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0002740 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:51 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:51 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa37c000b60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:51 compute-1 ceph-mon[79167]: 10.17 scrub starts
Oct 10 09:47:51 compute-1 ceph-mon[79167]: 10.17 scrub ok
Oct 10 09:47:51 compute-1 ceph-mon[79167]: 11.c scrub starts
Oct 10 09:47:51 compute-1 ceph-mon[79167]: 11.c scrub ok
Oct 10 09:47:51 compute-1 ceph-mon[79167]: osdmap e66: 3 total, 3 up, 3 in
Oct 10 09:47:51 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e67 e67: 3 total, 3 up, 3 in
Oct 10 09:47:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:47:52 compute-1 ceph-mon[79167]: 10.1 deep-scrub starts
Oct 10 09:47:52 compute-1 ceph-mon[79167]: 10.1 deep-scrub ok
Oct 10 09:47:52 compute-1 ceph-mon[79167]: pgmap v70: 353 pgs: 8 remapped+peering, 14 active+remapped, 2 peering, 329 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 806 B/s, 25 objects/s recovering
Oct 10 09:47:52 compute-1 ceph-mon[79167]: 11.b deep-scrub starts
Oct 10 09:47:52 compute-1 ceph-mon[79167]: 11.b deep-scrub ok
Oct 10 09:47:52 compute-1 ceph-mon[79167]: osdmap e67: 3 total, 3 up, 3 in
Oct 10 09:47:52 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:52 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:53 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0002740 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:53 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:53 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Oct 10 09:47:53 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Oct 10 09:47:53 compute-1 ceph-mon[79167]: 10.1f scrub starts
Oct 10 09:47:53 compute-1 ceph-mon[79167]: 10.1f scrub ok
Oct 10 09:47:53 compute-1 ceph-mon[79167]: 11.9 deep-scrub starts
Oct 10 09:47:53 compute-1 ceph-mon[79167]: 11.9 deep-scrub ok
Oct 10 09:47:54 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 12.15 scrub starts
Oct 10 09:47:54 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 12.15 scrub ok
Oct 10 09:47:54 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:54 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa37c0016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:54 compute-1 ceph-mon[79167]: 10.7 scrub starts
Oct 10 09:47:54 compute-1 ceph-mon[79167]: 10.7 scrub ok
Oct 10 09:47:54 compute-1 ceph-mon[79167]: pgmap v72: 353 pgs: 8 remapped+peering, 14 active+remapped, 2 peering, 329 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 632 B/s, 20 objects/s recovering
Oct 10 09:47:54 compute-1 ceph-mon[79167]: 11.d deep-scrub starts
Oct 10 09:47:54 compute-1 ceph-mon[79167]: 11.d deep-scrub ok
Oct 10 09:47:54 compute-1 ceph-mon[79167]: 7.19 scrub starts
Oct 10 09:47:54 compute-1 ceph-mon[79167]: 7.19 scrub ok
Oct 10 09:47:54 compute-1 ceph-mon[79167]: 10.1b scrub starts
Oct 10 09:47:54 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:54 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:54 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:54 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:54 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:54 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:54 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0.
Oct 10 09:47:54 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:54.933684) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 09:47:54 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16
Oct 10 09:47:54 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089674933710, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1077, "num_deletes": 251, "total_data_size": 1849896, "memory_usage": 1870896, "flush_reason": "Manual Compaction"}
Oct 10 09:47:54 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started
Oct 10 09:47:54 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089674941783, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 1187808, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6481, "largest_seqno": 7553, "table_properties": {"data_size": 1182537, "index_size": 2603, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 13502, "raw_average_key_size": 21, "raw_value_size": 1171020, "raw_average_value_size": 1847, "num_data_blocks": 115, "num_entries": 634, "num_filter_entries": 634, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089648, "oldest_key_time": 1760089648, "file_creation_time": 1760089674, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}}
Oct 10 09:47:54 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 8319 microseconds, and 5075 cpu microseconds.
Oct 10 09:47:54 compute-1 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 09:47:54 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:54.942000) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 1187808 bytes OK
Oct 10 09:47:54 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:54.942018) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started
Oct 10 09:47:54 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:54.943433) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done
Oct 10 09:47:54 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:54.943458) EVENT_LOG_v1 {"time_micros": 1760089674943450, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 09:47:54 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:54.943477) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 09:47:54 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 1844134, prev total WAL file size 1844134, number of live WAL files 2.
Oct 10 09:47:54 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 09:47:54 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:54.944602) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Oct 10 09:47:54 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 09:47:54 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(1159KB)], [15(10MB)]
Oct 10 09:47:54 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089674944638, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 12571234, "oldest_snapshot_seqno": -1}
Oct 10 09:47:54 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 3078 keys, 11335147 bytes, temperature: kUnknown
Oct 10 09:47:54 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089674998860, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 11335147, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11310656, "index_size": 15678, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7749, "raw_key_size": 79237, "raw_average_key_size": 25, "raw_value_size": 11249999, "raw_average_value_size": 3654, "num_data_blocks": 685, "num_entries": 3078, "num_filter_entries": 3078, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089522, "oldest_key_time": 0, "file_creation_time": 1760089674, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
Oct 10 09:47:54 compute-1 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 09:47:55 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:54.999086) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 11335147 bytes
Oct 10 09:47:55 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:55.000310) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 231.5 rd, 208.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 10.9 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(20.1) write-amplify(9.5) OK, records in: 3605, records dropped: 527 output_compression: NoCompression
Oct 10 09:47:55 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:55.000341) EVENT_LOG_v1 {"time_micros": 1760089675000333, "job": 6, "event": "compaction_finished", "compaction_time_micros": 54297, "compaction_time_cpu_micros": 28389, "output_level": 6, "num_output_files": 1, "total_output_size": 11335147, "num_input_records": 3605, "num_output_records": 3078, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 09:47:55 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 09:47:55 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089675000614, "job": 6, "event": "table_file_deletion", "file_number": 17}
Oct 10 09:47:55 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 09:47:55 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089675002482, "job": 6, "event": "table_file_deletion", "file_number": 15}
Oct 10 09:47:55 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:54.944480) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:47:55 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:55.002508) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:47:55 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:55.002511) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:47:55 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:55.002512) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:47:55 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:55.002513) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:47:55 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:47:55.002515) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:47:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:55 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:55 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0002740 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:55 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.c scrub starts
Oct 10 09:47:55 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.c scrub ok
Oct 10 09:47:56 compute-1 ceph-mon[79167]: 10.1b scrub ok
Oct 10 09:47:56 compute-1 ceph-mon[79167]: Deploying daemon haproxy.rgw.default.compute-0.ofnenu on compute-0
Oct 10 09:47:56 compute-1 ceph-mon[79167]: 8.e scrub starts
Oct 10 09:47:56 compute-1 ceph-mon[79167]: 8.e scrub ok
Oct 10 09:47:56 compute-1 ceph-mon[79167]: 12.15 scrub starts
Oct 10 09:47:56 compute-1 ceph-mon[79167]: 12.15 scrub ok
Oct 10 09:47:56 compute-1 ceph-mon[79167]: 12.7 deep-scrub starts
Oct 10 09:47:56 compute-1 ceph-mon[79167]: 12.7 deep-scrub ok
Oct 10 09:47:56 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct 10 09:47:56 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e68 e68: 3 total, 3 up, 3 in
Oct 10 09:47:56 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.d scrub starts
Oct 10 09:47:56 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.d scrub ok
Oct 10 09:47:56 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:56 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:57 compute-1 ceph-mon[79167]: pgmap v73: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 469 B/s, 17 objects/s recovering
Oct 10 09:47:57 compute-1 ceph-mon[79167]: 9.c scrub starts
Oct 10 09:47:57 compute-1 ceph-mon[79167]: 9.c scrub ok
Oct 10 09:47:57 compute-1 ceph-mon[79167]: 7.c scrub starts
Oct 10 09:47:57 compute-1 ceph-mon[79167]: 7.c scrub ok
Oct 10 09:47:57 compute-1 ceph-mon[79167]: 12.4 scrub starts
Oct 10 09:47:57 compute-1 ceph-mon[79167]: 12.4 scrub ok
Oct 10 09:47:57 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 10 09:47:57 compute-1 ceph-mon[79167]: osdmap e68: 3 total, 3 up, 3 in
Oct 10 09:47:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:47:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:57 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa37c0016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:57 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:57 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Oct 10 09:47:57 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Oct 10 09:47:57 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:47:57 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.003000067s ======
Oct 10 09:47:57 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:47:57.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000067s
Oct 10 09:47:58 compute-1 ceph-mon[79167]: 11.2 scrub starts
Oct 10 09:47:58 compute-1 ceph-mon[79167]: 11.2 scrub ok
Oct 10 09:47:58 compute-1 ceph-mon[79167]: 7.d scrub starts
Oct 10 09:47:58 compute-1 ceph-mon[79167]: 7.d scrub ok
Oct 10 09:47:58 compute-1 ceph-mon[79167]: 8.c scrub starts
Oct 10 09:47:58 compute-1 ceph-mon[79167]: 8.c scrub ok
Oct 10 09:47:58 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:58 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:58 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:58 compute-1 ceph-mon[79167]: Deploying daemon haproxy.rgw.default.compute-2.mhdkdo on compute-2
Oct 10 09:47:58 compute-1 ceph-mon[79167]: pgmap v75: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 2 objects/s recovering
Oct 10 09:47:58 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 10 09:47:58 compute-1 ceph-mon[79167]: 7.1 scrub starts
Oct 10 09:47:58 compute-1 ceph-mon[79167]: 7.1 scrub ok
Oct 10 09:47:58 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e69 e69: 3 total, 3 up, 3 in
Oct 10 09:47:58 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 69 pg[10.14( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=69 pruub=12.484390259s) [2] r=-1 lpr=69 pi=[56,69)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 195.768478394s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:58 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 69 pg[10.14( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=69 pruub=12.484338760s) [2] r=-1 lpr=69 pi=[56,69)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.768478394s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:58 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 69 pg[10.c( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=69 pruub=12.484121323s) [2] r=-1 lpr=69 pi=[56,69)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 195.768478394s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:58 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 69 pg[10.c( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=69 pruub=12.484080315s) [2] r=-1 lpr=69 pi=[56,69)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.768478394s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:58 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 69 pg[10.4( v 60'1098 (0'0,60'1098] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=69 pruub=12.482593536s) [2] r=-1 lpr=69 pi=[56,69)/1 crt=60'1098 lcod 60'1097 mlcod 60'1097 active pruub 195.767684937s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:58 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 69 pg[10.4( v 60'1098 (0'0,60'1098] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=69 pruub=12.482537270s) [2] r=-1 lpr=69 pi=[56,69)/1 crt=60'1098 lcod 60'1097 mlcod 0'0 unknown NOTIFY pruub 195.767684937s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:58 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 69 pg[10.1c( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=69 pruub=12.482146263s) [2] r=-1 lpr=69 pi=[56,69)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 195.767547607s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:58 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 69 pg[10.1c( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=69 pruub=12.482050896s) [2] r=-1 lpr=69 pi=[56,69)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.767547607s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:47:58 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e70 e70: 3 total, 3 up, 3 in
Oct 10 09:47:58 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 70 pg[10.1c( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=70) [2]/[1] r=0 lpr=70 pi=[56,70)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:58 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 70 pg[10.1c( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=70) [2]/[1] r=0 lpr=70 pi=[56,70)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:58 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 70 pg[10.c( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=70) [2]/[1] r=0 lpr=70 pi=[56,70)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:58 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 70 pg[10.c( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=70) [2]/[1] r=0 lpr=70 pi=[56,70)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:58 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 70 pg[10.4( v 60'1098 (0'0,60'1098] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=70) [2]/[1] r=0 lpr=70 pi=[56,70)/1 crt=60'1098 lcod 60'1097 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:58 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 70 pg[10.4( v 60'1098 (0'0,60'1098] local-lis/les=56/57 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=70) [2]/[1] r=0 lpr=70 pi=[56,70)/1 crt=60'1098 lcod 60'1097 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:58 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 70 pg[10.14( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=70) [2]/[1] r=0 lpr=70 pi=[56,70)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:47:58 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 70 pg[10.14( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=70) [2]/[1] r=0 lpr=70 pi=[56,70)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 09:47:58 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Oct 10 09:47:58 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Oct 10 09:47:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:58 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0002740 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:59 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:59 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:59 compute-1 ceph-mon[79167]: 8.1 deep-scrub starts
Oct 10 09:47:59 compute-1 ceph-mon[79167]: 8.1 deep-scrub ok
Oct 10 09:47:59 compute-1 ceph-mon[79167]: 12.11 scrub starts
Oct 10 09:47:59 compute-1 ceph-mon[79167]: 12.11 scrub ok
Oct 10 09:47:59 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 10 09:47:59 compute-1 ceph-mon[79167]: osdmap e69: 3 total, 3 up, 3 in
Oct 10 09:47:59 compute-1 ceph-mon[79167]: osdmap e70: 3 total, 3 up, 3 in
Oct 10 09:47:59 compute-1 ceph-mon[79167]: 7.7 scrub starts
Oct 10 09:47:59 compute-1 ceph-mon[79167]: 7.7 scrub ok
Oct 10 09:47:59 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:59 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:59 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:59 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:59 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:47:59 compute-1 ceph-mon[79167]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 10 09:47:59 compute-1 ceph-mon[79167]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 10 09:47:59 compute-1 ceph-mon[79167]: Deploying daemon keepalived.rgw.default.compute-2.bbeizy on compute-2
Oct 10 09:47:59 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e71 e71: 3 total, 3 up, 3 in
Oct 10 09:47:59 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 71 pg[10.14( v 51'1091 (0'0,51'1091] local-lis/les=70/71 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=70) [2]/[1] async=[2] r=0 lpr=70 pi=[56,70)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:59 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 71 pg[10.c( v 51'1091 (0'0,51'1091] local-lis/les=70/71 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=70) [2]/[1] async=[2] r=0 lpr=70 pi=[56,70)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:59 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 71 pg[10.4( v 60'1098 (0'0,60'1098] local-lis/les=70/71 n=6 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=70) [2]/[1] async=[2] r=0 lpr=70 pi=[56,70)/1 crt=60'1098 lcod 60'1097 mlcod 0'0 active+remapped mbc={255={(0+1)=10}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:59 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 71 pg[10.1c( v 51'1091 (0'0,51'1091] local-lis/les=70/71 n=5 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=70) [2]/[1] async=[2] r=0 lpr=70 pi=[56,70)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:47:59 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:47:59 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa37c0016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:47:59 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:47:59 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:47:59 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:47:59.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:47:59 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:47:59 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:47:59 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:47:59.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:00 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e72 e72: 3 total, 3 up, 3 in
Oct 10 09:48:00 compute-1 ceph-mon[79167]: 9.0 scrub starts
Oct 10 09:48:00 compute-1 ceph-mon[79167]: 9.0 scrub ok
Oct 10 09:48:00 compute-1 ceph-mon[79167]: 7.14 scrub starts
Oct 10 09:48:00 compute-1 ceph-mon[79167]: 7.14 scrub ok
Oct 10 09:48:00 compute-1 ceph-mon[79167]: osdmap e71: 3 total, 3 up, 3 in
Oct 10 09:48:00 compute-1 ceph-mon[79167]: pgmap v79: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:48:00 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 10 09:48:00 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 72 pg[10.4( v 71'1102 (0'0,71'1102] local-lis/les=70/71 n=6 ec=56/45 lis/c=70/56 les/c/f=71/57/0 sis=72 pruub=14.967370987s) [2] async=[2] r=-1 lpr=72 pi=[56,72)/1 crt=60'1098 lcod 71'1101 mlcod 71'1101 active pruub 200.424423218s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:00 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 72 pg[10.c( v 51'1091 (0'0,51'1091] local-lis/les=70/71 n=6 ec=56/45 lis/c=70/56 les/c/f=71/57/0 sis=72 pruub=14.967158318s) [2] async=[2] r=-1 lpr=72 pi=[56,72)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 200.424346924s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:00 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 72 pg[10.4( v 71'1102 (0'0,71'1102] local-lis/les=70/71 n=6 ec=56/45 lis/c=70/56 les/c/f=71/57/0 sis=72 pruub=14.967208862s) [2] r=-1 lpr=72 pi=[56,72)/1 crt=60'1098 lcod 71'1101 mlcod 0'0 unknown NOTIFY pruub 200.424423218s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:00 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 72 pg[10.c( v 51'1091 (0'0,51'1091] local-lis/les=70/71 n=6 ec=56/45 lis/c=70/56 les/c/f=71/57/0 sis=72 pruub=14.967059135s) [2] r=-1 lpr=72 pi=[56,72)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 200.424346924s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:00 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 72 pg[10.1c( v 51'1091 (0'0,51'1091] local-lis/les=70/71 n=5 ec=56/45 lis/c=70/56 les/c/f=71/57/0 sis=72 pruub=14.966670990s) [2] async=[2] r=-1 lpr=72 pi=[56,72)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 200.424346924s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:00 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 72 pg[10.14( v 51'1091 (0'0,51'1091] local-lis/les=70/71 n=5 ec=56/45 lis/c=70/56 les/c/f=71/57/0 sis=72 pruub=14.961228371s) [2] async=[2] r=-1 lpr=72 pi=[56,72)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 200.419113159s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:00 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 72 pg[10.1c( v 51'1091 (0'0,51'1091] local-lis/les=70/71 n=5 ec=56/45 lis/c=70/56 les/c/f=71/57/0 sis=72 pruub=14.966114044s) [2] r=-1 lpr=72 pi=[56,72)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 200.424346924s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:00 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 72 pg[10.14( v 51'1091 (0'0,51'1091] local-lis/les=70/71 n=5 ec=56/45 lis/c=70/56 les/c/f=71/57/0 sis=72 pruub=14.960394859s) [2] r=-1 lpr=72 pi=[56,72)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 200.419113159s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:00 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Oct 10 09:48:00 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Oct 10 09:48:00 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:00 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:01 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:01 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0002740 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:01 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:01 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:01 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e73 e73: 3 total, 3 up, 3 in
Oct 10 09:48:01 compute-1 ceph-mon[79167]: 8.0 scrub starts
Oct 10 09:48:01 compute-1 ceph-mon[79167]: 8.0 scrub ok
Oct 10 09:48:01 compute-1 ceph-mon[79167]: 12.1d scrub starts
Oct 10 09:48:01 compute-1 ceph-mon[79167]: 12.1d scrub ok
Oct 10 09:48:01 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 10 09:48:01 compute-1 ceph-mon[79167]: osdmap e72: 3 total, 3 up, 3 in
Oct 10 09:48:01 compute-1 ceph-mon[79167]: 10.0 scrub starts
Oct 10 09:48:01 compute-1 ceph-mon[79167]: 10.0 scrub ok
Oct 10 09:48:01 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:01 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:01 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:01 compute-1 ceph-mon[79167]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 10 09:48:01 compute-1 ceph-mon[79167]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 10 09:48:01 compute-1 ceph-mon[79167]: Deploying daemon keepalived.rgw.default.compute-0.igkrok on compute-0
Oct 10 09:48:01 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:01 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:01 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:01.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:01 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Oct 10 09:48:01 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Oct 10 09:48:01 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:01 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:01 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:01.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:48:02 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Oct 10 09:48:02 compute-1 ceph-mon[79167]: 9.1 scrub starts
Oct 10 09:48:02 compute-1 ceph-mon[79167]: 9.1 scrub ok
Oct 10 09:48:02 compute-1 ceph-mon[79167]: 9.5 scrub starts
Oct 10 09:48:02 compute-1 ceph-mon[79167]: 9.5 scrub ok
Oct 10 09:48:02 compute-1 ceph-mon[79167]: osdmap e73: 3 total, 3 up, 3 in
Oct 10 09:48:02 compute-1 ceph-mon[79167]: pgmap v82: 353 pgs: 1 active+remapped, 1 active+recovering+remapped, 2 active+recovery_wait+remapped, 349 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 19/219 objects misplaced (8.676%); 0 B/s, 2 objects/s recovering
Oct 10 09:48:02 compute-1 ceph-mon[79167]: 8.7 scrub starts
Oct 10 09:48:02 compute-1 ceph-mon[79167]: 8.7 scrub ok
Oct 10 09:48:02 compute-1 ceph-mon[79167]: 10.8 scrub starts
Oct 10 09:48:02 compute-1 ceph-mon[79167]: 10.8 scrub ok
Oct 10 09:48:02 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Oct 10 09:48:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e74 e74: 3 total, 3 up, 3 in
Oct 10 09:48:02 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:02 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa37c002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:03 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:03 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e75 e75: 3 total, 3 up, 3 in
Oct 10 09:48:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:03 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0002740 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:03 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Oct 10 09:48:03 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Oct 10 09:48:03 compute-1 ceph-mon[79167]: 9.18 scrub starts
Oct 10 09:48:03 compute-1 ceph-mon[79167]: 9.18 scrub ok
Oct 10 09:48:03 compute-1 ceph-mon[79167]: 10.10 scrub starts
Oct 10 09:48:03 compute-1 ceph-mon[79167]: 10.10 scrub ok
Oct 10 09:48:03 compute-1 ceph-mon[79167]: osdmap e74: 3 total, 3 up, 3 in
Oct 10 09:48:03 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:03 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:03 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:03 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:03 compute-1 ceph-mon[79167]: osdmap e75: 3 total, 3 up, 3 in
Oct 10 09:48:03 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:03 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:03 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:03.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:03 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:03 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:03 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:03.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:04 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e76 e76: 3 total, 3 up, 3 in
Oct 10 09:48:04 compute-1 ceph-mon[79167]: 11.19 scrub starts
Oct 10 09:48:04 compute-1 ceph-mon[79167]: 11.19 scrub ok
Oct 10 09:48:04 compute-1 ceph-mon[79167]: Deploying daemon prometheus.compute-0 on compute-0
Oct 10 09:48:04 compute-1 ceph-mon[79167]: 10.18 scrub starts
Oct 10 09:48:04 compute-1 ceph-mon[79167]: 10.18 scrub ok
Oct 10 09:48:04 compute-1 ceph-mon[79167]: pgmap v85: 353 pgs: 1 active+remapped, 1 active+recovering+remapped, 2 active+recovery_wait+remapped, 349 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 19/219 objects misplaced (8.676%); 0 B/s, 1 objects/s recovering
Oct 10 09:48:04 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:04 compute-1 ceph-mon[79167]: osdmap e76: 3 total, 3 up, 3 in
Oct 10 09:48:04 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:04 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:05 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:05 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa37c002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:05 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:05 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:05 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 12.f deep-scrub starts
Oct 10 09:48:05 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 12.f deep-scrub ok
Oct 10 09:48:05 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e77 e77: 3 total, 3 up, 3 in
Oct 10 09:48:05 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:05 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:05 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:05.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:05 compute-1 ceph-mon[79167]: 8.5 scrub starts
Oct 10 09:48:05 compute-1 ceph-mon[79167]: 8.5 scrub ok
Oct 10 09:48:05 compute-1 ceph-mon[79167]: 10.15 scrub starts
Oct 10 09:48:05 compute-1 ceph-mon[79167]: 10.15 scrub ok
Oct 10 09:48:05 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 10 09:48:05 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:05 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:05 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:05.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:06 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 77 pg[10.16( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=77) [1] r=0 lpr=77 pi=[66,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:06 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 77 pg[10.e( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=77) [1] r=0 lpr=77 pi=[66,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:06 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 77 pg[10.6( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=77) [1] r=0 lpr=77 pi=[66,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:06 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 77 pg[10.1e( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=77) [1] r=0 lpr=77 pi=[66,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:06 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Oct 10 09:48:06 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Oct 10 09:48:06 compute-1 ceph-mon[79167]: 9.9 scrub starts
Oct 10 09:48:06 compute-1 ceph-mon[79167]: 9.9 scrub ok
Oct 10 09:48:06 compute-1 ceph-mon[79167]: 12.f deep-scrub starts
Oct 10 09:48:06 compute-1 ceph-mon[79167]: 12.f deep-scrub ok
Oct 10 09:48:06 compute-1 ceph-mon[79167]: pgmap v87: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 262 B/s, 10 objects/s recovering
Oct 10 09:48:06 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 10 09:48:06 compute-1 ceph-mon[79167]: osdmap e77: 3 total, 3 up, 3 in
Oct 10 09:48:06 compute-1 ceph-mon[79167]: 10.d deep-scrub starts
Oct 10 09:48:06 compute-1 ceph-mon[79167]: 10.d deep-scrub ok
Oct 10 09:48:06 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e78 e78: 3 total, 3 up, 3 in
Oct 10 09:48:06 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 78 pg[10.16( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[66,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:06 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 78 pg[10.16( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[66,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:06 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 78 pg[10.e( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[66,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:06 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 78 pg[10.e( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[66,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:06 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 78 pg[10.6( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[66,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:06 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 78 pg[10.6( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[66,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:06 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 78 pg[10.1e( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[66,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:06 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 78 pg[10.1e( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=78) [1]/[0] r=-1 lpr=78 pi=[66,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:06 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:06 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0002740 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e78 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:48:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:07 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:07 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 12.d scrub starts
Oct 10 09:48:07 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 12.d scrub ok
Oct 10 09:48:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:07 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa37c002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:07 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:07 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000023s ======
Oct 10 09:48:07 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:07.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 10 09:48:07 compute-1 ceph-mon[79167]: 7.5 scrub starts
Oct 10 09:48:07 compute-1 ceph-mon[79167]: 7.5 scrub ok
Oct 10 09:48:07 compute-1 ceph-mon[79167]: 7.0 scrub starts
Oct 10 09:48:07 compute-1 ceph-mon[79167]: 7.0 scrub ok
Oct 10 09:48:07 compute-1 ceph-mon[79167]: 10.12 scrub starts
Oct 10 09:48:07 compute-1 ceph-mon[79167]: osdmap e78: 3 total, 3 up, 3 in
Oct 10 09:48:07 compute-1 ceph-mon[79167]: 10.12 scrub ok
Oct 10 09:48:07 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 10 09:48:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e79 e79: 3 total, 3 up, 3 in
Oct 10 09:48:07 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 79 pg[10.f( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=79) [1] r=0 lpr=79 pi=[65,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:07 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 79 pg[10.1f( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=79) [1] r=0 lpr=79 pi=[65,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:07 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 79 pg[10.17( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=64/64 les/c/f=65/65/0 sis=79) [1] r=0 lpr=79 pi=[64,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:07 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 79 pg[10.7( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=79) [1] r=0 lpr=79 pi=[65,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:07 compute-1 sshd-session[86062]: Accepted publickey for zuul from 192.168.122.30 port 50166 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:48:07 compute-1 systemd-logind[789]: New session 37 of user zuul.
Oct 10 09:48:07 compute-1 systemd[1]: Started Session 37 of User zuul.
Oct 10 09:48:07 compute-1 sshd-session[86062]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:48:07 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:07 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:07 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:07.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:08 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e80 e80: 3 total, 3 up, 3 in
Oct 10 09:48:08 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 80 pg[10.6( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=6 ec=56/45 lis/c=78/66 les/c/f=79/67/0 sis=80) [1] r=0 lpr=80 pi=[66,80)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:08 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 80 pg[10.6( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=6 ec=56/45 lis/c=78/66 les/c/f=79/67/0 sis=80) [1] r=0 lpr=80 pi=[66,80)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:08 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 80 pg[10.e( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=6 ec=56/45 lis/c=78/66 les/c/f=79/67/0 sis=80) [1] r=0 lpr=80 pi=[66,80)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:08 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 80 pg[10.17( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=64/64 les/c/f=65/65/0 sis=80) [1]/[2] r=-1 lpr=80 pi=[64,80)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:08 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 80 pg[10.e( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=6 ec=56/45 lis/c=78/66 les/c/f=79/67/0 sis=80) [1] r=0 lpr=80 pi=[66,80)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:08 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 80 pg[10.17( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=64/64 les/c/f=65/65/0 sis=80) [1]/[2] r=-1 lpr=80 pi=[64,80)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:08 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 80 pg[10.1e( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=78/66 les/c/f=79/67/0 sis=80) [1] r=0 lpr=80 pi=[66,80)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:08 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 80 pg[10.1e( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=78/66 les/c/f=79/67/0 sis=80) [1] r=0 lpr=80 pi=[66,80)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:08 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 80 pg[10.f( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=80) [1]/[2] r=-1 lpr=80 pi=[65,80)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:08 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 80 pg[10.f( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=80) [1]/[2] r=-1 lpr=80 pi=[65,80)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:08 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 80 pg[10.16( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=78/66 les/c/f=79/67/0 sis=80) [1] r=0 lpr=80 pi=[66,80)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:08 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 80 pg[10.16( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=78/66 les/c/f=79/67/0 sis=80) [1] r=0 lpr=80 pi=[66,80)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:08 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 80 pg[10.1f( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=80) [1]/[2] r=-1 lpr=80 pi=[65,80)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:08 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 80 pg[10.1f( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=80) [1]/[2] r=-1 lpr=80 pi=[65,80)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:08 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 80 pg[10.7( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=80) [1]/[2] r=-1 lpr=80 pi=[65,80)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:08 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 80 pg[10.7( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=80) [1]/[2] r=-1 lpr=80 pi=[65,80)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:08 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 12.5 scrub starts
Oct 10 09:48:08 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 12.5 scrub ok
Oct 10 09:48:08 compute-1 ceph-mon[79167]: 12.2 deep-scrub starts
Oct 10 09:48:08 compute-1 ceph-mon[79167]: 12.2 deep-scrub ok
Oct 10 09:48:08 compute-1 ceph-mon[79167]: 12.d scrub starts
Oct 10 09:48:08 compute-1 ceph-mon[79167]: 12.d scrub ok
Oct 10 09:48:08 compute-1 ceph-mon[79167]: pgmap v90: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 258 B/s, 10 objects/s recovering
Oct 10 09:48:08 compute-1 ceph-mon[79167]: 9.4 scrub starts
Oct 10 09:48:08 compute-1 ceph-mon[79167]: 9.4 scrub ok
Oct 10 09:48:08 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 10 09:48:08 compute-1 ceph-mon[79167]: osdmap e79: 3 total, 3 up, 3 in
Oct 10 09:48:08 compute-1 ceph-mon[79167]: osdmap e80: 3 total, 3 up, 3 in
Oct 10 09:48:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:08 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:08 compute-1 python3.9[86215]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:48:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0002740 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:09 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e81 e81: 3 total, 3 up, 3 in
Oct 10 09:48:09 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 12.0 scrub starts
Oct 10 09:48:09 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 12.0 scrub ok
Oct 10 09:48:09 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 81 pg[10.16( v 51'1091 (0'0,51'1091] local-lis/les=80/81 n=5 ec=56/45 lis/c=78/66 les/c/f=79/67/0 sis=80) [1] r=0 lpr=80 pi=[66,80)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:48:09 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 81 pg[10.e( v 51'1091 (0'0,51'1091] local-lis/les=80/81 n=6 ec=56/45 lis/c=78/66 les/c/f=79/67/0 sis=80) [1] r=0 lpr=80 pi=[66,80)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:48:09 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 81 pg[10.6( v 51'1091 (0'0,51'1091] local-lis/les=80/81 n=6 ec=56/45 lis/c=78/66 les/c/f=79/67/0 sis=80) [1] r=0 lpr=80 pi=[66,80)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:48:09 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 81 pg[10.1e( v 51'1091 (0'0,51'1091] local-lis/les=80/81 n=5 ec=56/45 lis/c=78/66 les/c/f=79/67/0 sis=80) [1] r=0 lpr=80 pi=[66,80)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:48:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:09 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:09 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:09 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:09.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:09 compute-1 ceph-mon[79167]: 9.7 deep-scrub starts
Oct 10 09:48:09 compute-1 ceph-mon[79167]: 9.7 deep-scrub ok
Oct 10 09:48:09 compute-1 ceph-mon[79167]: 12.5 scrub starts
Oct 10 09:48:09 compute-1 ceph-mon[79167]: 12.5 scrub ok
Oct 10 09:48:09 compute-1 ceph-mon[79167]: 11.18 deep-scrub starts
Oct 10 09:48:09 compute-1 ceph-mon[79167]: 11.18 deep-scrub ok
Oct 10 09:48:09 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:09 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:09 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:09 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Oct 10 09:48:09 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:09 compute-1 ceph-mon[79167]: osdmap e81: 3 total, 3 up, 3 in
Oct 10 09:48:09 compute-1 ceph-mgr[79476]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct 10 09:48:09 compute-1 ceph-mgr[79476]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct 10 09:48:09 compute-1 ceph-mgr[79476]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct 10 09:48:09 compute-1 ceph-mgr[79476]: mgr respawn  1: '-n'
Oct 10 09:48:09 compute-1 ceph-mgr[79476]: mgr respawn  2: 'mgr.compute-1.rfugxc'
Oct 10 09:48:09 compute-1 ceph-mgr[79476]: mgr respawn  3: '-f'
Oct 10 09:48:09 compute-1 ceph-mgr[79476]: mgr respawn  4: '--setuser'
Oct 10 09:48:09 compute-1 ceph-mgr[79476]: mgr respawn  5: 'ceph'
Oct 10 09:48:09 compute-1 ceph-mgr[79476]: mgr respawn  6: '--setgroup'
Oct 10 09:48:09 compute-1 ceph-mgr[79476]: mgr respawn  7: 'ceph'
Oct 10 09:48:09 compute-1 ceph-mgr[79476]: mgr respawn  8: '--default-log-to-file=false'
Oct 10 09:48:09 compute-1 ceph-mgr[79476]: mgr respawn  9: '--default-log-to-journald=true'
Oct 10 09:48:09 compute-1 ceph-mgr[79476]: mgr respawn  10: '--default-log-to-stderr=false'
Oct 10 09:48:09 compute-1 ceph-mgr[79476]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Oct 10 09:48:09 compute-1 ceph-mgr[79476]: mgr respawn  exe_path /proc/self/exe
Oct 10 09:48:09 compute-1 sshd-session[82241]: Connection closed by 192.168.122.100 port 50142
Oct 10 09:48:09 compute-1 sshd-session[82222]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 10 09:48:09 compute-1 systemd[1]: session-35.scope: Deactivated successfully.
Oct 10 09:48:09 compute-1 systemd[1]: session-35.scope: Consumed 23.239s CPU time.
Oct 10 09:48:09 compute-1 systemd-logind[789]: Session 35 logged out. Waiting for processes to exit.
Oct 10 09:48:09 compute-1 systemd-logind[789]: Removed session 35.
Oct 10 09:48:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: ignoring --setuser ceph since I am not root
Oct 10 09:48:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: ignoring --setgroup ceph since I am not root
Oct 10 09:48:09 compute-1 ceph-mgr[79476]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct 10 09:48:09 compute-1 ceph-mgr[79476]: pidfile_write: ignore empty --pid-file
Oct 10 09:48:09 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'alerts'
Oct 10 09:48:09 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:09 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000023s ======
Oct 10 09:48:09 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:09.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 10 09:48:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:09.981+0000 7f73f2c98140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 10 09:48:09 compute-1 ceph-mgr[79476]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 10 09:48:09 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'balancer'
Oct 10 09:48:10 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:10.062+0000 7f73f2c98140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 10 09:48:10 compute-1 ceph-mgr[79476]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 10 09:48:10 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'cephadm'
Oct 10 09:48:10 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e82 e82: 3 total, 3 up, 3 in
Oct 10 09:48:10 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 82 pg[10.7( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=80/65 les/c/f=81/66/0 sis=82) [1] r=0 lpr=82 pi=[65,82)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:10 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 82 pg[10.1f( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=80/65 les/c/f=81/66/0 sis=82) [1] r=0 lpr=82 pi=[65,82)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:10 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 82 pg[10.7( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=80/65 les/c/f=81/66/0 sis=82) [1] r=0 lpr=82 pi=[65,82)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:10 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 82 pg[10.1f( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=80/65 les/c/f=81/66/0 sis=82) [1] r=0 lpr=82 pi=[65,82)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:10 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 82 pg[10.17( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=3 ec=56/45 lis/c=80/64 les/c/f=81/65/0 sis=82) [1] r=0 lpr=82 pi=[64,82)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:10 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 82 pg[10.17( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=3 ec=56/45 lis/c=80/64 les/c/f=81/65/0 sis=82) [1] r=0 lpr=82 pi=[64,82)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:10 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 82 pg[10.f( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=6 ec=56/45 lis/c=80/65 les/c/f=81/66/0 sis=82) [1] r=0 lpr=82 pi=[65,82)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:10 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 82 pg[10.f( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=6 ec=56/45 lis/c=80/65 les/c/f=81/66/0 sis=82) [1] r=0 lpr=82 pi=[65,82)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:10 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 12.1f scrub starts
Oct 10 09:48:10 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 12.1f scrub ok
Oct 10 09:48:10 compute-1 sudo[86459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfqndoacsdeyxpdeoomphmcavjdzkmdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089690.1052194-57-125095691946726/AnsiballZ_command.py'
Oct 10 09:48:10 compute-1 sudo[86459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:48:10 compute-1 ceph-mon[79167]: 8.b scrub starts
Oct 10 09:48:10 compute-1 ceph-mon[79167]: 8.b scrub ok
Oct 10 09:48:10 compute-1 ceph-mon[79167]: 12.0 scrub starts
Oct 10 09:48:10 compute-1 ceph-mon[79167]: 12.0 scrub ok
Oct 10 09:48:10 compute-1 ceph-mon[79167]: 9.1a scrub starts
Oct 10 09:48:10 compute-1 ceph-mon[79167]: 9.1a scrub ok
Oct 10 09:48:10 compute-1 ceph-mon[79167]: from='mgr.14442 192.168.122.100:0/3496043196' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Oct 10 09:48:10 compute-1 ceph-mon[79167]: mgrmap e28: compute-0.xkdepb(active, since 96s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:48:10 compute-1 ceph-mon[79167]: osdmap e82: 3 total, 3 up, 3 in
Oct 10 09:48:10 compute-1 python3.9[86461]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:48:10 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'crash'
Oct 10 09:48:10 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:10.860+0000 7f73f2c98140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 10 09:48:10 compute-1 ceph-mgr[79476]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 10 09:48:10 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'dashboard'
Oct 10 09:48:10 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:10 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa37c003c10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:11 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:11 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'devicehealth'
Oct 10 09:48:11 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e83 e83: 3 total, 3 up, 3 in
Oct 10 09:48:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:11.491+0000 7f73f2c98140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 10 09:48:11 compute-1 ceph-mgr[79476]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 10 09:48:11 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'diskprediction_local'
Oct 10 09:48:11 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 83 pg[10.17( v 51'1091 (0'0,51'1091] local-lis/les=82/83 n=3 ec=56/45 lis/c=80/64 les/c/f=81/65/0 sis=82) [1] r=0 lpr=82 pi=[64,82)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:48:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:11 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0002740 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:11 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 83 pg[10.f( v 51'1091 (0'0,51'1091] local-lis/les=82/83 n=6 ec=56/45 lis/c=80/65 les/c/f=81/66/0 sis=82) [1] r=0 lpr=82 pi=[65,82)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:48:11 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 83 pg[10.7( v 51'1091 (0'0,51'1091] local-lis/les=82/83 n=5 ec=56/45 lis/c=80/65 les/c/f=81/66/0 sis=82) [1] r=0 lpr=82 pi=[65,82)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:48:11 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 83 pg[10.1f( v 51'1091 (0'0,51'1091] local-lis/les=82/83 n=5 ec=56/45 lis/c=80/65 les/c/f=81/66/0 sis=82) [1] r=0 lpr=82 pi=[65,82)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:48:11 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Oct 10 09:48:11 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Oct 10 09:48:11 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:11 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:11 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:11.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 10 09:48:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 10 09:48:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]:   from numpy import show_config as show_numpy_config
Oct 10 09:48:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:11.663+0000 7f73f2c98140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 10 09:48:11 compute-1 ceph-mgr[79476]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 10 09:48:11 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'influx'
Oct 10 09:48:11 compute-1 ceph-mon[79167]: 12.1e scrub starts
Oct 10 09:48:11 compute-1 ceph-mon[79167]: 12.1e scrub ok
Oct 10 09:48:11 compute-1 ceph-mon[79167]: 12.1f scrub starts
Oct 10 09:48:11 compute-1 ceph-mon[79167]: 12.1f scrub ok
Oct 10 09:48:11 compute-1 ceph-mon[79167]: 9.1b deep-scrub starts
Oct 10 09:48:11 compute-1 ceph-mon[79167]: 9.1b deep-scrub ok
Oct 10 09:48:11 compute-1 ceph-mon[79167]: osdmap e83: 3 total, 3 up, 3 in
Oct 10 09:48:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:11.734+0000 7f73f2c98140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 10 09:48:11 compute-1 ceph-mgr[79476]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 10 09:48:11 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'insights'
Oct 10 09:48:11 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'iostat'
Oct 10 09:48:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:11.878+0000 7f73f2c98140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 10 09:48:11 compute-1 ceph-mgr[79476]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 10 09:48:11 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'k8sevents'
Oct 10 09:48:11 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:11 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:11 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:11.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:48:12 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'localpool'
Oct 10 09:48:12 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'mds_autoscaler'
Oct 10 09:48:12 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 12.1b scrub starts
Oct 10 09:48:12 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'mirroring'
Oct 10 09:48:12 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 12.1b scrub ok
Oct 10 09:48:12 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'nfs'
Oct 10 09:48:12 compute-1 ceph-mon[79167]: 11.8 scrub starts
Oct 10 09:48:12 compute-1 ceph-mon[79167]: 11.8 scrub ok
Oct 10 09:48:12 compute-1 ceph-mon[79167]: 7.17 scrub starts
Oct 10 09:48:12 compute-1 ceph-mon[79167]: 8.1a scrub starts
Oct 10 09:48:12 compute-1 ceph-mon[79167]: 7.17 scrub ok
Oct 10 09:48:12 compute-1 ceph-mon[79167]: 8.1a scrub ok
Oct 10 09:48:12 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:12.844+0000 7f73f2c98140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 10 09:48:12 compute-1 ceph-mgr[79476]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 10 09:48:12 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'orchestrator'
Oct 10 09:48:12 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:12 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390004050 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:13.056+0000 7f73f2c98140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 10 09:48:13 compute-1 ceph-mgr[79476]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 10 09:48:13 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'osd_perf_query'
Oct 10 09:48:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:13.129+0000 7f73f2c98140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 10 09:48:13 compute-1 ceph-mgr[79476]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 10 09:48:13 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'osd_support'
Oct 10 09:48:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:13.196+0000 7f73f2c98140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 10 09:48:13 compute-1 ceph-mgr[79476]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 10 09:48:13 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'pg_autoscaler'
Oct 10 09:48:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:13 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa37c003c10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:13.277+0000 7f73f2c98140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 10 09:48:13 compute-1 ceph-mgr[79476]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 10 09:48:13 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'progress'
Oct 10 09:48:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:13.354+0000 7f73f2c98140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 10 09:48:13 compute-1 ceph-mgr[79476]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 10 09:48:13 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'prometheus'
Oct 10 09:48:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:13 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:13 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 12.16 deep-scrub starts
Oct 10 09:48:13 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 12.16 deep-scrub ok
Oct 10 09:48:13 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:13 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:13 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:13.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:13 compute-1 ceph-mgr[79476]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 10 09:48:13 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'rbd_support'
Oct 10 09:48:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:13.691+0000 7f73f2c98140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 10 09:48:13 compute-1 ceph-mon[79167]: 8.a scrub starts
Oct 10 09:48:13 compute-1 ceph-mon[79167]: 8.a scrub ok
Oct 10 09:48:13 compute-1 ceph-mon[79167]: 12.1b scrub starts
Oct 10 09:48:13 compute-1 ceph-mon[79167]: 12.1b scrub ok
Oct 10 09:48:13 compute-1 ceph-mon[79167]: 11.6 scrub starts
Oct 10 09:48:13 compute-1 ceph-mon[79167]: 11.6 scrub ok
Oct 10 09:48:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:13.787+0000 7f73f2c98140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 10 09:48:13 compute-1 ceph-mgr[79476]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 10 09:48:13 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'restful'
Oct 10 09:48:13 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:13 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:13 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:13.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:14 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'rgw'
Oct 10 09:48:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:14.227+0000 7f73f2c98140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 10 09:48:14 compute-1 ceph-mgr[79476]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 10 09:48:14 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'rook'
Oct 10 09:48:14 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 12.14 scrub starts
Oct 10 09:48:14 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 12.14 scrub ok
Oct 10 09:48:14 compute-1 ceph-mon[79167]: 9.16 scrub starts
Oct 10 09:48:14 compute-1 ceph-mon[79167]: 9.16 scrub ok
Oct 10 09:48:14 compute-1 ceph-mon[79167]: 12.16 deep-scrub starts
Oct 10 09:48:14 compute-1 ceph-mon[79167]: 9.19 scrub starts
Oct 10 09:48:14 compute-1 ceph-mon[79167]: 12.16 deep-scrub ok
Oct 10 09:48:14 compute-1 ceph-mon[79167]: 9.19 scrub ok
Oct 10 09:48:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:14.776+0000 7f73f2c98140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 10 09:48:14 compute-1 ceph-mgr[79476]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 10 09:48:14 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'selftest'
Oct 10 09:48:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:14.842+0000 7f73f2c98140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 10 09:48:14 compute-1 ceph-mgr[79476]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 10 09:48:14 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'snap_schedule'
Oct 10 09:48:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:14 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0002740 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:14.915+0000 7f73f2c98140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 10 09:48:14 compute-1 ceph-mgr[79476]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 10 09:48:14 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'stats'
Oct 10 09:48:14 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'status'
Oct 10 09:48:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:15.055+0000 7f73f2c98140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 10 09:48:15 compute-1 ceph-mgr[79476]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 10 09:48:15 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'telegraf'
Oct 10 09:48:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:15.129+0000 7f73f2c98140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 10 09:48:15 compute-1 ceph-mgr[79476]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 10 09:48:15 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'telemetry'
Oct 10 09:48:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:15 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390004050 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:15.275+0000 7f73f2c98140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 10 09:48:15 compute-1 ceph-mgr[79476]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 10 09:48:15 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'test_orchestrator'
Oct 10 09:48:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:15.479+0000 7f73f2c98140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 10 09:48:15 compute-1 ceph-mgr[79476]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 10 09:48:15 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'volumes'
Oct 10 09:48:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:15 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa37c003c10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:15 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:15 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:15 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:15.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:15 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 12.1 scrub starts
Oct 10 09:48:15 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 12.1 scrub ok
Oct 10 09:48:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:15.741+0000 7f73f2c98140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 10 09:48:15 compute-1 ceph-mgr[79476]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 10 09:48:15 compute-1 ceph-mgr[79476]: mgr[py] Loading python module 'zabbix'
Oct 10 09:48:15 compute-1 ceph-mon[79167]: 9.b deep-scrub starts
Oct 10 09:48:15 compute-1 ceph-mon[79167]: 9.b deep-scrub ok
Oct 10 09:48:15 compute-1 ceph-mon[79167]: 12.14 scrub starts
Oct 10 09:48:15 compute-1 ceph-mon[79167]: 9.1e scrub starts
Oct 10 09:48:15 compute-1 ceph-mon[79167]: 12.14 scrub ok
Oct 10 09:48:15 compute-1 ceph-mon[79167]: 9.1e scrub ok
Oct 10 09:48:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 2025-10-10T09:48:15.807+0000 7f73f2c98140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 10 09:48:15 compute-1 ceph-mgr[79476]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 10 09:48:15 compute-1 ceph-mgr[79476]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:48:15 compute-1 ceph-mgr[79476]: mgr load Constructed class from module: dashboard
Oct 10 09:48:15 compute-1 ceph-mgr[79476]: [prometheus DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 10 09:48:15 compute-1 ceph-mgr[79476]: mgr load Constructed class from module: prometheus
Oct 10 09:48:15 compute-1 ceph-mgr[79476]: [dashboard INFO root] server: ssl=no host=192.168.122.101 port=8443
Oct 10 09:48:15 compute-1 ceph-mgr[79476]: [dashboard INFO root] Configured CherryPy, starting engine...
Oct 10 09:48:15 compute-1 ceph-mgr[79476]: [dashboard INFO root] Starting engine...
Oct 10 09:48:15 compute-1 ceph-mgr[79476]: [prometheus INFO root] server_addr: :: server_port: 9283
Oct 10 09:48:15 compute-1 ceph-mgr[79476]: [prometheus INFO root] Starting engine...
Oct 10 09:48:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: [10/Oct/2025:09:48:15] ENGINE Bus STARTING
Oct 10 09:48:15 compute-1 ceph-mgr[79476]: [prometheus INFO cherrypy.error] [10/Oct/2025:09:48:15] ENGINE Bus STARTING
Oct 10 09:48:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: CherryPy Checker:
Oct 10 09:48:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: The Application mounted at '' has an empty config.
Oct 10 09:48:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: 
Oct 10 09:48:15 compute-1 ceph-mgr[79476]: ms_deliver_dispatch: unhandled message 0x5626eff61860 mon_map magic: 0 from mon.2 v2:192.168.122.101:3300/0
Oct 10 09:48:15 compute-1 ceph-mgr[79476]: [dashboard INFO root] Engine started...
Oct 10 09:48:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: [10/Oct/2025:09:48:15] ENGINE Serving on http://:::9283
Oct 10 09:48:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-mgr-compute-1-rfugxc[79472]: [10/Oct/2025:09:48:15] ENGINE Bus STARTED
Oct 10 09:48:15 compute-1 ceph-mgr[79476]: [prometheus INFO cherrypy.error] [10/Oct/2025:09:48:15] ENGINE Serving on http://:::9283
Oct 10 09:48:15 compute-1 ceph-mgr[79476]: [prometheus INFO cherrypy.error] [10/Oct/2025:09:48:15] ENGINE Bus STARTED
Oct 10 09:48:15 compute-1 ceph-mgr[79476]: [prometheus INFO root] Engine started.
Oct 10 09:48:15 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:15 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:15 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:15.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:16 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e84 e84: 3 total, 3 up, 3 in
Oct 10 09:48:16 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Oct 10 09:48:16 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Oct 10 09:48:16 compute-1 sshd-session[86513]: Accepted publickey for ceph-admin from 192.168.122.100 port 41558 ssh2: RSA SHA256:iFwOnwcB2x2Q1gpAWZobZa2jCZZy75CuUHv4ViVnHA0
Oct 10 09:48:16 compute-1 systemd-logind[789]: New session 38 of user ceph-admin.
Oct 10 09:48:16 compute-1 ceph-mon[79167]: 11.13 scrub starts
Oct 10 09:48:16 compute-1 ceph-mon[79167]: 11.13 scrub ok
Oct 10 09:48:16 compute-1 ceph-mon[79167]: 9.1f scrub starts
Oct 10 09:48:16 compute-1 ceph-mon[79167]: 9.1f scrub ok
Oct 10 09:48:16 compute-1 ceph-mon[79167]: 12.1 scrub starts
Oct 10 09:48:16 compute-1 ceph-mon[79167]: 12.1 scrub ok
Oct 10 09:48:16 compute-1 ceph-mon[79167]: Standby manager daemon compute-1.rfugxc restarted
Oct 10 09:48:16 compute-1 ceph-mon[79167]: Standby manager daemon compute-1.rfugxc started
Oct 10 09:48:16 compute-1 ceph-mon[79167]: Standby manager daemon compute-2.gkrssp restarted
Oct 10 09:48:16 compute-1 ceph-mon[79167]: Standby manager daemon compute-2.gkrssp started
Oct 10 09:48:16 compute-1 ceph-mon[79167]: Active manager daemon compute-0.xkdepb restarted
Oct 10 09:48:16 compute-1 ceph-mon[79167]: Activating manager daemon compute-0.xkdepb
Oct 10 09:48:16 compute-1 ceph-mon[79167]: osdmap e84: 3 total, 3 up, 3 in
Oct 10 09:48:16 compute-1 ceph-mon[79167]: mgrmap e29: compute-0.xkdepb(active, starting, since 0.0333518s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:48:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 10 09:48:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 10 09:48:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 10 09:48:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.cchwlo"}]: dispatch
Oct 10 09:48:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.fhagzt"}]: dispatch
Oct 10 09:48:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.vlgajy"}]: dispatch
Oct 10 09:48:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr metadata", "who": "compute-0.xkdepb", "id": "compute-0.xkdepb"}]: dispatch
Oct 10 09:48:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr metadata", "who": "compute-1.rfugxc", "id": "compute-1.rfugxc"}]: dispatch
Oct 10 09:48:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mgr metadata", "who": "compute-2.gkrssp", "id": "compute-2.gkrssp"}]: dispatch
Oct 10 09:48:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 10 09:48:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 10 09:48:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 10 09:48:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 10 09:48:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 10 09:48:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 10 09:48:16 compute-1 ceph-mon[79167]: Manager daemon compute-0.xkdepb is now available
Oct 10 09:48:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:48:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xkdepb/mirror_snapshot_schedule"}]: dispatch
Oct 10 09:48:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.xkdepb/trash_purge_schedule"}]: dispatch
Oct 10 09:48:16 compute-1 systemd[1]: Started Session 38 of User ceph-admin.
Oct 10 09:48:16 compute-1 sshd-session[86513]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 10 09:48:16 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:16 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:16 compute-1 sudo[86522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:48:16 compute-1 sudo[86522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:16 compute-1 sudo[86522]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:17 compute-1 sudo[86548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 10 09:48:17 compute-1 sudo[86548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:48:17 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:17 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0002740 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:17 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:17 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390004050 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:17 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:17 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:17 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:17.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:17 compute-1 sudo[86459]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:17 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Oct 10 09:48:17 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Oct 10 09:48:17 compute-1 podman[86646]: 2025-10-10 09:48:17.731810635 +0000 UTC m=+0.122208023 container exec 8a2c16c69263d410aefee28722d7403147210c67386f896d09115878d61e6ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Oct 10 09:48:17 compute-1 ceph-mon[79167]: 11.a scrub starts
Oct 10 09:48:17 compute-1 ceph-mon[79167]: 11.a scrub ok
Oct 10 09:48:17 compute-1 ceph-mon[79167]: 8.1e scrub starts
Oct 10 09:48:17 compute-1 ceph-mon[79167]: 8.1e scrub ok
Oct 10 09:48:17 compute-1 ceph-mon[79167]: 9.15 scrub starts
Oct 10 09:48:17 compute-1 ceph-mon[79167]: 9.15 scrub ok
Oct 10 09:48:17 compute-1 ceph-mon[79167]: mgrmap e30: compute-0.xkdepb(active, since 1.06227s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:48:17 compute-1 podman[86646]: 2025-10-10 09:48:17.839894995 +0000 UTC m=+0.230292323 container exec_died 8a2c16c69263d410aefee28722d7403147210c67386f896d09115878d61e6ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:48:17 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:17 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000023s ======
Oct 10 09:48:17 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:17.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 10 09:48:18 compute-1 sshd-session[86065]: Connection closed by 192.168.122.30 port 50166
Oct 10 09:48:18 compute-1 sshd-session[86062]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:48:18 compute-1 systemd-logind[789]: Session 37 logged out. Waiting for processes to exit.
Oct 10 09:48:18 compute-1 systemd[1]: session-37.scope: Deactivated successfully.
Oct 10 09:48:18 compute-1 systemd[1]: session-37.scope: Consumed 8.775s CPU time.
Oct 10 09:48:18 compute-1 systemd-logind[789]: Removed session 37.
Oct 10 09:48:18 compute-1 podman[86786]: 2025-10-10 09:48:18.43302946 +0000 UTC m=+0.072866886 container exec db44b00524aaf85460d5de4878ad14e99cea939212a287f5217a7de1b31f7321 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:48:18 compute-1 podman[86786]: 2025-10-10 09:48:18.447730036 +0000 UTC m=+0.087567452 container exec_died db44b00524aaf85460d5de4878ad14e99cea939212a287f5217a7de1b31f7321 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:48:18 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Oct 10 09:48:18 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Oct 10 09:48:18 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e85 e85: 3 total, 3 up, 3 in
Oct 10 09:48:18 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 85 pg[10.8( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=7 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=85 pruub=8.015638351s) [0] r=-1 lpr=85 pi=[56,85)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 211.768310547s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:18 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 85 pg[10.8( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=7 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=85 pruub=8.015065193s) [0] r=-1 lpr=85 pi=[56,85)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 211.768310547s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:18 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 85 pg[10.18( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=4 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=85 pruub=8.009293556s) [0] r=-1 lpr=85 pi=[56,85)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 211.763580322s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:18 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 85 pg[10.18( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=4 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=85 pruub=8.009239197s) [0] r=-1 lpr=85 pi=[56,85)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 211.763580322s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:18 compute-1 ceph-mon[79167]: 9.3 scrub starts
Oct 10 09:48:18 compute-1 ceph-mon[79167]: 9.3 scrub ok
Oct 10 09:48:18 compute-1 ceph-mon[79167]: pgmap v3: 353 pgs: 353 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:48:18 compute-1 ceph-mon[79167]: 9.1c scrub starts
Oct 10 09:48:18 compute-1 ceph-mon[79167]: 9.1c scrub ok
Oct 10 09:48:18 compute-1 ceph-mon[79167]: 9.10 scrub starts
Oct 10 09:48:18 compute-1 ceph-mon[79167]: 9.10 scrub ok
Oct 10 09:48:18 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 10 09:48:18 compute-1 ceph-mon[79167]: 11.1f scrub starts
Oct 10 09:48:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:18 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa37c003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:18 compute-1 podman[86878]: 2025-10-10 09:48:18.920255755 +0000 UTC m=+0.089401754 container exec 2391dd632d14ec9648c3d8d1edd069f6584c3097475f03ae8ea909b98a6066a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct 10 09:48:18 compute-1 podman[86878]: 2025-10-10 09:48:18.941749125 +0000 UTC m=+0.110895124 container exec_died 2391dd632d14ec9648c3d8d1edd069f6584c3097475f03ae8ea909b98a6066a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 10 09:48:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:19 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:19 compute-1 podman[86944]: 2025-10-10 09:48:19.266524567 +0000 UTC m=+0.077460210 container exec 7ce1be4884f313700b9f3fe6c66e14b163c4d6bf31089a45b751a6389f8be562 (image=quay.io/ceph/haproxy:2.3, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw)
Oct 10 09:48:19 compute-1 podman[86944]: 2025-10-10 09:48:19.281046929 +0000 UTC m=+0.091982532 container exec_died 7ce1be4884f313700b9f3fe6c66e14b163c4d6bf31089a45b751a6389f8be562 (image=quay.io/ceph/haproxy:2.3, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw)
Oct 10 09:48:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:19 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:19 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:19 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000022s ======
Oct 10 09:48:19 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:19.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Oct 10 09:48:19 compute-1 podman[87007]: 2025-10-10 09:48:19.616358092 +0000 UTC m=+0.085841873 container exec 8efe4c511a9ca69c4fbe77af128c823fd37bedd1e530d3bfb1fc1044e25a5138 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, release=1793, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph.)
Oct 10 09:48:19 compute-1 podman[87007]: 2025-10-10 09:48:19.635908089 +0000 UTC m=+0.105391830 container exec_died 8efe4c511a9ca69c4fbe77af128c823fd37bedd1e530d3bfb1fc1044e25a5138 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=keepalived-container, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=2.2.4, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=keepalived for Ceph, release=1793)
Oct 10 09:48:19 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Oct 10 09:48:19 compute-1 sudo[86548]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:19 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Oct 10 09:48:19 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e86 e86: 3 total, 3 up, 3 in
Oct 10 09:48:19 compute-1 sudo[87041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:48:19 compute-1 sudo[87041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:19 compute-1 sudo[87041]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:19 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 86 pg[10.8( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=7 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=86) [0]/[1] r=0 lpr=86 pi=[56,86)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:19 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 86 pg[10.18( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=4 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=86) [0]/[1] r=0 lpr=86 pi=[56,86)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:19 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 86 pg[10.18( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=4 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=86) [0]/[1] r=0 lpr=86 pi=[56,86)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:19 compute-1 ceph-mon[79167]: [10/Oct/2025:09:48:17] ENGINE Bus STARTING
Oct 10 09:48:19 compute-1 ceph-mon[79167]: 8.9 scrub starts
Oct 10 09:48:19 compute-1 ceph-mon[79167]: [10/Oct/2025:09:48:17] ENGINE Serving on http://192.168.122.100:8765
Oct 10 09:48:19 compute-1 ceph-mon[79167]: 8.9 scrub ok
Oct 10 09:48:19 compute-1 ceph-mon[79167]: [10/Oct/2025:09:48:18] ENGINE Serving on https://192.168.122.100:7150
Oct 10 09:48:19 compute-1 ceph-mon[79167]: [10/Oct/2025:09:48:18] ENGINE Bus STARTED
Oct 10 09:48:19 compute-1 ceph-mon[79167]: [10/Oct/2025:09:48:18] ENGINE Client ('192.168.122.100', 53560) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 10 09:48:19 compute-1 ceph-mon[79167]: pgmap v4: 353 pgs: 353 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:48:19 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 86 pg[10.8( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=7 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=86) [0]/[1] r=0 lpr=86 pi=[56,86)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:19 compute-1 ceph-mon[79167]: 11.1f scrub ok
Oct 10 09:48:19 compute-1 ceph-mon[79167]: 11.12 scrub starts
Oct 10 09:48:19 compute-1 ceph-mon[79167]: 11.12 scrub ok
Oct 10 09:48:19 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 10 09:48:19 compute-1 ceph-mon[79167]: osdmap e85: 3 total, 3 up, 3 in
Oct 10 09:48:19 compute-1 ceph-mon[79167]: mgrmap e31: compute-0.xkdepb(active, since 2s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:48:19 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:19 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:19 compute-1 ceph-mon[79167]: 11.10 scrub starts
Oct 10 09:48:19 compute-1 ceph-mon[79167]: 11.10 scrub ok
Oct 10 09:48:19 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:19 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:19 compute-1 sudo[87066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 09:48:19 compute-1 sudo[87066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:19 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:19 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:19 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:19.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:20 compute-1 sudo[87066]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:20 compute-1 sudo[87122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:48:20 compute-1 sudo[87122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:20 compute-1 sudo[87122]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:20 compute-1 sudo[87147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Oct 10 09:48:20 compute-1 sudo[87147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:20 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 9.d deep-scrub starts
Oct 10 09:48:20 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 9.d deep-scrub ok
Oct 10 09:48:20 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e87 e87: 3 total, 3 up, 3 in
Oct 10 09:48:20 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 87 pg[10.19( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=87) [1] r=0 lpr=87 pi=[65,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:20 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 87 pg[10.9( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=87) [1] r=0 lpr=87 pi=[65,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:20 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 87 pg[10.18( v 51'1091 (0'0,51'1091] local-lis/les=86/87 n=4 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=86) [0]/[1] async=[0] r=0 lpr=86 pi=[56,86)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:48:20 compute-1 ceph-mon[79167]: 9.8 scrub starts
Oct 10 09:48:20 compute-1 ceph-mon[79167]: 9.8 scrub ok
Oct 10 09:48:20 compute-1 ceph-mon[79167]: 11.1 scrub starts
Oct 10 09:48:20 compute-1 ceph-mon[79167]: 11.1 scrub ok
Oct 10 09:48:20 compute-1 ceph-mon[79167]: osdmap e86: 3 total, 3 up, 3 in
Oct 10 09:48:20 compute-1 ceph-mon[79167]: 8.2 scrub starts
Oct 10 09:48:20 compute-1 ceph-mon[79167]: 8.2 scrub ok
Oct 10 09:48:20 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:20 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:20 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 10 09:48:20 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 10 09:48:20 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:20 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:20 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 87 pg[10.8( v 51'1091 (0'0,51'1091] local-lis/les=86/87 n=7 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=86) [0]/[1] async=[0] r=0 lpr=86 pi=[56,86)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:48:20 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:20 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:21 compute-1 sudo[87147]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:21 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:21 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0002740 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:21 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:21 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:21 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:21.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:21 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 11.f deep-scrub starts
Oct 10 09:48:21 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 11.f deep-scrub ok
Oct 10 09:48:21 compute-1 sudo[87191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 10 09:48:21 compute-1 sudo[87191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:21 compute-1 sudo[87191]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:21 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e88 e88: 3 total, 3 up, 3 in
Oct 10 09:48:21 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 88 pg[10.9( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=88) [1]/[2] r=-1 lpr=88 pi=[65,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:21 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 88 pg[10.9( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=88) [1]/[2] r=-1 lpr=88 pi=[65,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:21 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 88 pg[10.8( v 51'1091 (0'0,51'1091] local-lis/les=86/87 n=7 ec=56/45 lis/c=86/56 les/c/f=87/57/0 sis=88 pruub=15.009707451s) [0] async=[0] r=-1 lpr=88 pi=[56,88)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 221.819091797s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:21 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 88 pg[10.8( v 51'1091 (0'0,51'1091] local-lis/les=86/87 n=7 ec=56/45 lis/c=86/56 les/c/f=87/57/0 sis=88 pruub=15.009644508s) [0] r=-1 lpr=88 pi=[56,88)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 221.819091797s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:21 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 88 pg[10.19( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=88) [1]/[2] r=-1 lpr=88 pi=[65,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:21 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 88 pg[10.19( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=88) [1]/[2] r=-1 lpr=88 pi=[65,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:21 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 88 pg[10.18( v 51'1091 (0'0,51'1091] local-lis/les=86/87 n=4 ec=56/45 lis/c=86/56 les/c/f=87/57/0 sis=88 pruub=14.999447823s) [0] async=[0] r=-1 lpr=88 pi=[56,88)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 221.809829712s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:21 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 88 pg[10.18( v 51'1091 (0'0,51'1091] local-lis/les=86/87 n=4 ec=56/45 lis/c=86/56 les/c/f=87/57/0 sis=88 pruub=14.999012947s) [0] r=-1 lpr=88 pi=[56,88)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 221.809829712s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:21 compute-1 ceph-mon[79167]: pgmap v7: 353 pgs: 353 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:48:21 compute-1 ceph-mon[79167]: 8.13 scrub starts
Oct 10 09:48:21 compute-1 ceph-mon[79167]: 8.13 scrub ok
Oct 10 09:48:21 compute-1 ceph-mon[79167]: 9.d deep-scrub starts
Oct 10 09:48:21 compute-1 ceph-mon[79167]: 9.d deep-scrub ok
Oct 10 09:48:21 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 10 09:48:21 compute-1 ceph-mon[79167]: osdmap e87: 3 total, 3 up, 3 in
Oct 10 09:48:21 compute-1 ceph-mon[79167]: 9.17 scrub starts
Oct 10 09:48:21 compute-1 ceph-mon[79167]: 9.17 scrub ok
Oct 10 09:48:21 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:21 compute-1 ceph-mon[79167]: mgrmap e32: compute-0.xkdepb(active, since 4s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:48:21 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:21 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 10 09:48:21 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Oct 10 09:48:21 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:21 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:21 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 10 09:48:21 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:48:21 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:48:21 compute-1 ceph-mon[79167]: osdmap e88: 3 total, 3 up, 3 in
Oct 10 09:48:21 compute-1 sudo[87216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph
Oct 10 09:48:21 compute-1 sudo[87216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:21 compute-1 sudo[87216]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:21 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:21 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000023s ======
Oct 10 09:48:21 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:21.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 10 09:48:21 compute-1 sudo[87241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new
Oct 10 09:48:21 compute-1 sudo[87241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:21 compute-1 sudo[87241]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:22 compute-1 sudo[87266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:48:22 compute-1 sudo[87266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:22 compute-1 sudo[87266]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:22 compute-1 sudo[87291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new
Oct 10 09:48:22 compute-1 sudo[87291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:22 compute-1 sudo[87291]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:22 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:48:22 compute-1 sudo[87339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new
Oct 10 09:48:22 compute-1 sudo[87339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:22 compute-1 sudo[87339]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:22 compute-1 sudo[87364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new
Oct 10 09:48:22 compute-1 sudo[87364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:22 compute-1 sudo[87364]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:22 compute-1 sudo[87389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Oct 10 09:48:22 compute-1 sudo[87389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:22 compute-1 sudo[87389]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:22 compute-1 sudo[87414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config
Oct 10 09:48:22 compute-1 sudo[87414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:22 compute-1 sudo[87414]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:22 compute-1 sudo[87439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config
Oct 10 09:48:22 compute-1 sudo[87439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:22 compute-1 sudo[87439]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:22 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 9.e scrub starts
Oct 10 09:48:22 compute-1 sudo[87464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new
Oct 10 09:48:22 compute-1 sudo[87464]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:22 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 9.e scrub ok
Oct 10 09:48:22 compute-1 sudo[87464]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:22 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e89 e89: 3 total, 3 up, 3 in
Oct 10 09:48:22 compute-1 sudo[87489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:48:22 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 89 pg[10.1a( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=89) [1] r=0 lpr=89 pi=[66,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:22 compute-1 sudo[87489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:22 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 89 pg[10.a( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=89) [1] r=0 lpr=89 pi=[66,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:22 compute-1 sudo[87489]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:22 compute-1 ceph-mon[79167]: 11.11 deep-scrub starts
Oct 10 09:48:22 compute-1 ceph-mon[79167]: 11.11 deep-scrub ok
Oct 10 09:48:22 compute-1 ceph-mon[79167]: Updating compute-0:/etc/ceph/ceph.conf
Oct 10 09:48:22 compute-1 ceph-mon[79167]: Updating compute-1:/etc/ceph/ceph.conf
Oct 10 09:48:22 compute-1 ceph-mon[79167]: Updating compute-2:/etc/ceph/ceph.conf
Oct 10 09:48:22 compute-1 ceph-mon[79167]: 11.f deep-scrub starts
Oct 10 09:48:22 compute-1 ceph-mon[79167]: 11.f deep-scrub ok
Oct 10 09:48:22 compute-1 ceph-mon[79167]: 8.1f scrub starts
Oct 10 09:48:22 compute-1 ceph-mon[79167]: 8.1f scrub ok
Oct 10 09:48:22 compute-1 ceph-mon[79167]: pgmap v10: 353 pgs: 353 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:48:22 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 10 09:48:22 compute-1 ceph-mon[79167]: Updating compute-2:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:48:22 compute-1 ceph-mon[79167]: Updating compute-0:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:48:22 compute-1 ceph-mon[79167]: Updating compute-1:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:48:22 compute-1 ceph-mon[79167]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 10 09:48:22 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 10 09:48:22 compute-1 ceph-mon[79167]: osdmap e89: 3 total, 3 up, 3 in
Oct 10 09:48:22 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:22 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa37c003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:22 compute-1 sudo[87514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new
Oct 10 09:48:22 compute-1 sudo[87514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:22 compute-1 sudo[87514]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:23 compute-1 sudo[87564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new
Oct 10 09:48:23 compute-1 sudo[87564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:23 compute-1 sudo[87564]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:23 compute-1 sudo[87589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new
Oct 10 09:48:23 compute-1 sudo[87589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:23 compute-1 sudo[87589]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:23 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:23 compute-1 sudo[87614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf.new /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.conf
Oct 10 09:48:23 compute-1 sudo[87614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:23 compute-1 sudo[87614]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:23 compute-1 sudo[87640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 10 09:48:23 compute-1 sudo[87640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:23 compute-1 sudo[87640]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:23 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e90 e90: 3 total, 3 up, 3 in
Oct 10 09:48:23 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 90 pg[10.9( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=6 ec=56/45 lis/c=88/65 les/c/f=89/66/0 sis=90) [1] r=0 lpr=90 pi=[65,90)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:23 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 90 pg[10.9( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=6 ec=56/45 lis/c=88/65 les/c/f=89/66/0 sis=90) [1] r=0 lpr=90 pi=[65,90)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:23 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 90 pg[10.a( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[66,90)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:23 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 90 pg[10.a( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[66,90)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:23 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 90 pg[10.19( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=88/65 les/c/f=89/66/0 sis=90) [1] r=0 lpr=90 pi=[65,90)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:23 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 90 pg[10.1a( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[66,90)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:23 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 90 pg[10.19( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=88/65 les/c/f=89/66/0 sis=90) [1] r=0 lpr=90 pi=[65,90)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:23 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 90 pg[10.1a( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=66/66 les/c/f=67/67/0 sis=90) [1]/[0] r=-1 lpr=90 pi=[66,90)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:23 compute-1 sudo[87665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph
Oct 10 09:48:23 compute-1 sudo[87665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:23 compute-1 sudo[87665]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:23 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388002c70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:23 compute-1 sudo[87690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.client.admin.keyring.new
Oct 10 09:48:23 compute-1 sudo[87690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:23 compute-1 sudo[87690]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:23 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:23 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000023s ======
Oct 10 09:48:23 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:23.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 10 09:48:23 compute-1 sudo[87715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:48:23 compute-1 sudo[87715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:23 compute-1 sudo[87715]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:23 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Oct 10 09:48:23 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Oct 10 09:48:23 compute-1 sudo[87740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.client.admin.keyring.new
Oct 10 09:48:23 compute-1 sudo[87740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:23 compute-1 sudo[87740]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:23 compute-1 ceph-mon[79167]: 8.1d scrub starts
Oct 10 09:48:23 compute-1 ceph-mon[79167]: 8.1d scrub ok
Oct 10 09:48:23 compute-1 ceph-mon[79167]: 9.e scrub starts
Oct 10 09:48:23 compute-1 ceph-mon[79167]: 9.e scrub ok
Oct 10 09:48:23 compute-1 ceph-mon[79167]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 10 09:48:23 compute-1 ceph-mon[79167]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 10 09:48:23 compute-1 ceph-mon[79167]: Updating compute-2:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:48:23 compute-1 ceph-mon[79167]: osdmap e90: 3 total, 3 up, 3 in
Oct 10 09:48:23 compute-1 ceph-mon[79167]: Updating compute-0:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:48:23 compute-1 sudo[87788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.client.admin.keyring.new
Oct 10 09:48:23 compute-1 sudo[87788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:23 compute-1 sudo[87788]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:23 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:23 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000023s ======
Oct 10 09:48:23 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:23.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 10 09:48:24 compute-1 sudo[87813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.client.admin.keyring.new
Oct 10 09:48:24 compute-1 sudo[87813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:24 compute-1 sudo[87813]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:24 compute-1 sudo[87838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Oct 10 09:48:24 compute-1 sudo[87838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:24 compute-1 sudo[87838]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:24 compute-1 sudo[87863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config
Oct 10 09:48:24 compute-1 sudo[87863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:24 compute-1 sudo[87863]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:24 compute-1 sudo[87888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config
Oct 10 09:48:24 compute-1 sudo[87888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:24 compute-1 sudo[87888]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:24 compute-1 sudo[87913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring.new
Oct 10 09:48:24 compute-1 sudo[87913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:24 compute-1 sudo[87913]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:24 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e91 e91: 3 total, 3 up, 3 in
Oct 10 09:48:24 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 91 pg[10.19( v 51'1091 (0'0,51'1091] local-lis/les=90/91 n=5 ec=56/45 lis/c=88/65 les/c/f=89/66/0 sis=90) [1] r=0 lpr=90 pi=[65,90)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:48:24 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 91 pg[10.9( v 51'1091 (0'0,51'1091] local-lis/les=90/91 n=6 ec=56/45 lis/c=88/65 les/c/f=89/66/0 sis=90) [1] r=0 lpr=90 pi=[65,90)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:48:24 compute-1 sudo[87938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4
Oct 10 09:48:24 compute-1 sudo[87938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:24 compute-1 sudo[87938]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:24 compute-1 sudo[87963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring.new
Oct 10 09:48:24 compute-1 sudo[87963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:24 compute-1 sudo[87963]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:24 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Oct 10 09:48:24 compute-1 sudo[88011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring.new
Oct 10 09:48:24 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Oct 10 09:48:24 compute-1 sudo[88011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:24 compute-1 sudo[88011]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:24 compute-1 sudo[88036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring.new
Oct 10 09:48:24 compute-1 sudo[88036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:24 compute-1 sudo[88036]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:24 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:24 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004410 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:24 compute-1 sudo[88061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-21f084a3-af34-5230-afe4-ea5cd24a55f4/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring.new /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:48:24 compute-1 sudo[88061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:24 compute-1 ceph-mon[79167]: 7.b scrub starts
Oct 10 09:48:24 compute-1 ceph-mon[79167]: 7.b scrub ok
Oct 10 09:48:24 compute-1 ceph-mon[79167]: 11.5 scrub starts
Oct 10 09:48:24 compute-1 ceph-mon[79167]: 11.5 scrub ok
Oct 10 09:48:24 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:24 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:24 compute-1 ceph-mon[79167]: Updating compute-1:/var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/config/ceph.client.admin.keyring
Oct 10 09:48:24 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:24 compute-1 ceph-mon[79167]: pgmap v13: 353 pgs: 2 remapped+peering, 2 peering, 349 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 0 B/s wr, 12 op/s; 54 B/s, 2 objects/s recovering
Oct 10 09:48:24 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:24 compute-1 ceph-mon[79167]: osdmap e91: 3 total, 3 up, 3 in
Oct 10 09:48:24 compute-1 sudo[88061]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:25 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:25 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa37c003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:25 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e92 e92: 3 total, 3 up, 3 in
Oct 10 09:48:25 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 92 pg[10.a( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=6 ec=56/45 lis/c=90/66 les/c/f=91/67/0 sis=92) [1] r=0 lpr=92 pi=[66,92)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:25 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 92 pg[10.1a( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=90/66 les/c/f=91/67/0 sis=92) [1] r=0 lpr=92 pi=[66,92)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:25 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 92 pg[10.a( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=6 ec=56/45 lis/c=90/66 les/c/f=91/67/0 sis=92) [1] r=0 lpr=92 pi=[66,92)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:25 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 92 pg[10.1a( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=90/66 les/c/f=91/67/0 sis=92) [1] r=0 lpr=92 pi=[66,92)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:25 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:25 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:25 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:25 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:25 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:25.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:25 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Oct 10 09:48:25 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Oct 10 09:48:25 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:25 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000022s ======
Oct 10 09:48:25 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:25.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Oct 10 09:48:25 compute-1 ceph-mon[79167]: 11.4 scrub starts
Oct 10 09:48:25 compute-1 ceph-mon[79167]: 11.4 scrub ok
Oct 10 09:48:25 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:25 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:25 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:25 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:25 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 09:48:25 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:48:25 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:48:25 compute-1 ceph-mon[79167]: osdmap e92: 3 total, 3 up, 3 in
Oct 10 09:48:26 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e93 e93: 3 total, 3 up, 3 in
Oct 10 09:48:26 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 93 pg[10.1a( v 51'1091 (0'0,51'1091] local-lis/les=92/93 n=5 ec=56/45 lis/c=90/66 les/c/f=91/67/0 sis=92) [1] r=0 lpr=92 pi=[66,92)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:48:26 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 93 pg[10.a( v 51'1091 (0'0,51'1091] local-lis/les=92/93 n=6 ec=56/45 lis/c=90/66 les/c/f=91/67/0 sis=92) [1] r=0 lpr=92 pi=[66,92)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:48:26 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 8.4 deep-scrub starts
Oct 10 09:48:26 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 8.4 deep-scrub ok
Oct 10 09:48:26 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:26 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388002c70 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:26 compute-1 ceph-mon[79167]: 11.7 scrub starts
Oct 10 09:48:26 compute-1 ceph-mon[79167]: 11.7 scrub ok
Oct 10 09:48:26 compute-1 ceph-mon[79167]: pgmap v16: 353 pgs: 2 remapped+peering, 2 peering, 349 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 0 B/s wr, 12 op/s; 54 B/s, 2 objects/s recovering
Oct 10 09:48:26 compute-1 ceph-mon[79167]: osdmap e93: 3 total, 3 up, 3 in
Oct 10 09:48:27 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:48:27 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:27 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004410 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:27 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:27 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa37c003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:27 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:27 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:27 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:27.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:27 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Oct 10 09:48:27 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Oct 10 09:48:27 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:27 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000022s ======
Oct 10 09:48:27 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:27.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Oct 10 09:48:27 compute-1 ceph-mon[79167]: 8.4 deep-scrub starts
Oct 10 09:48:27 compute-1 ceph-mon[79167]: 8.4 deep-scrub ok
Oct 10 09:48:28 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Oct 10 09:48:28 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Oct 10 09:48:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:28 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:29 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e94 e94: 3 total, 3 up, 3 in
Oct 10 09:48:29 compute-1 ceph-mon[79167]: 11.1b scrub starts
Oct 10 09:48:29 compute-1 ceph-mon[79167]: 11.1b scrub ok
Oct 10 09:48:29 compute-1 ceph-mon[79167]: 8.1c scrub starts
Oct 10 09:48:29 compute-1 ceph-mon[79167]: 8.1c scrub ok
Oct 10 09:48:29 compute-1 ceph-mon[79167]: pgmap v18: 353 pgs: 353 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 116 B/s, 5 objects/s recovering
Oct 10 09:48:29 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct 10 09:48:29 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 94 pg[10.1b( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=94) [1] r=0 lpr=94 pi=[65,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:29 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 94 pg[10.b( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=94) [1] r=0 lpr=94 pi=[65,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:29 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388001d10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:29 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004410 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:29 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:29 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:29 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:29.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:29 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 11.1c deep-scrub starts
Oct 10 09:48:29 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 11.1c deep-scrub ok
Oct 10 09:48:29 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:29 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000023s ======
Oct 10 09:48:29 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:29.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Oct 10 09:48:30 compute-1 ceph-mon[79167]: 11.1d scrub starts
Oct 10 09:48:30 compute-1 ceph-mon[79167]: 11.1d scrub ok
Oct 10 09:48:30 compute-1 ceph-mon[79167]: 12.1a deep-scrub starts
Oct 10 09:48:30 compute-1 ceph-mon[79167]: 12.1a deep-scrub ok
Oct 10 09:48:30 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 10 09:48:30 compute-1 ceph-mon[79167]: osdmap e94: 3 total, 3 up, 3 in
Oct 10 09:48:30 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:30 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:30 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e95 e95: 3 total, 3 up, 3 in
Oct 10 09:48:30 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 95 pg[10.1b( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[65,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:30 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 95 pg[10.b( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[65,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:30 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 95 pg[10.1b( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[65,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:30 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 95 pg[10.b( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=65/65 les/c/f=66/66/0 sis=95) [1]/[2] r=-1 lpr=95 pi=[65,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:30 compute-1 sudo[88089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:48:30 compute-1 sudo[88089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:30 compute-1 sudo[88089]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:30 compute-1 sudo[88113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 09:48:30 compute-1 sudo[88113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:30 compute-1 sudo[88113]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:30 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Oct 10 09:48:30 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Oct 10 09:48:30 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:30 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004410 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:31 compute-1 ceph-mon[79167]: 12.1c scrub starts
Oct 10 09:48:31 compute-1 ceph-mon[79167]: 12.1c scrub ok
Oct 10 09:48:31 compute-1 ceph-mon[79167]: 11.1c deep-scrub starts
Oct 10 09:48:31 compute-1 ceph-mon[79167]: 11.1c deep-scrub ok
Oct 10 09:48:31 compute-1 ceph-mon[79167]: 12.17 scrub starts
Oct 10 09:48:31 compute-1 ceph-mon[79167]: 12.17 scrub ok
Oct 10 09:48:31 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:31 compute-1 ceph-mon[79167]: osdmap e95: 3 total, 3 up, 3 in
Oct 10 09:48:31 compute-1 ceph-mon[79167]: pgmap v21: 353 pgs: 353 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 116 B/s, 5 objects/s recovering
Oct 10 09:48:31 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 10 09:48:31 compute-1 ceph-mon[79167]: Reconfiguring alertmanager.compute-0 (dependencies changed)...
Oct 10 09:48:31 compute-1 ceph-mon[79167]: Reconfiguring daemon alertmanager.compute-0 on compute-0
Oct 10 09:48:31 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e96 e96: 3 total, 3 up, 3 in
Oct 10 09:48:31 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 96 pg[10.c( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=72/72 les/c/f=73/73/0 sis=96) [1] r=0 lpr=96 pi=[72,96)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:31 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 96 pg[10.1c( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=72/72 les/c/f=73/73/0 sis=96) [1] r=0 lpr=96 pi=[72,96)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:31 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:31 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388001d10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:31 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:31 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:31 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:31.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:31 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Oct 10 09:48:31 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Oct 10 09:48:31 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:31 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:31 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:31.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:32 compute-1 ceph-mon[79167]: 7.8 scrub starts
Oct 10 09:48:32 compute-1 ceph-mon[79167]: 7.8 scrub ok
Oct 10 09:48:32 compute-1 ceph-mon[79167]: 8.12 scrub starts
Oct 10 09:48:32 compute-1 ceph-mon[79167]: 8.12 scrub ok
Oct 10 09:48:32 compute-1 ceph-mon[79167]: 9.1d scrub starts
Oct 10 09:48:32 compute-1 ceph-mon[79167]: 9.1d scrub ok
Oct 10 09:48:32 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 10 09:48:32 compute-1 ceph-mon[79167]: osdmap e96: 3 total, 3 up, 3 in
Oct 10 09:48:32 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:48:32 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e97 e97: 3 total, 3 up, 3 in
Oct 10 09:48:32 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 97 pg[10.c( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=72/72 les/c/f=73/73/0 sis=97) [1]/[2] r=-1 lpr=97 pi=[72,97)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:32 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 97 pg[10.b( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=6 ec=56/45 lis/c=95/65 les/c/f=96/66/0 sis=97) [1] r=0 lpr=97 pi=[65,97)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:32 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 97 pg[10.c( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=72/72 les/c/f=73/73/0 sis=97) [1]/[2] r=-1 lpr=97 pi=[72,97)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:32 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 97 pg[10.b( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=6 ec=56/45 lis/c=95/65 les/c/f=96/66/0 sis=97) [1] r=0 lpr=97 pi=[65,97)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:32 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 97 pg[10.1c( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=72/72 les/c/f=73/73/0 sis=97) [1]/[2] r=-1 lpr=97 pi=[72,97)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:32 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 97 pg[10.1c( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=72/72 les/c/f=73/73/0 sis=97) [1]/[2] r=-1 lpr=97 pi=[72,97)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:32 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 97 pg[10.1b( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=2 ec=56/45 lis/c=95/65 les/c/f=96/66/0 sis=97) [1] r=0 lpr=97 pi=[65,97)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:32 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 97 pg[10.1b( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=2 ec=56/45 lis/c=95/65 les/c/f=96/66/0 sis=97) [1] r=0 lpr=97 pi=[65,97)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:32 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:48:32 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Oct 10 09:48:32 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Oct 10 09:48:32 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:32 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388001d10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:33 compute-1 ceph-mon[79167]: 7.f scrub starts
Oct 10 09:48:33 compute-1 ceph-mon[79167]: 7.f scrub ok
Oct 10 09:48:33 compute-1 ceph-mon[79167]: 8.19 scrub starts
Oct 10 09:48:33 compute-1 ceph-mon[79167]: 8.19 scrub ok
Oct 10 09:48:33 compute-1 ceph-mon[79167]: 11.16 deep-scrub starts
Oct 10 09:48:33 compute-1 ceph-mon[79167]: 11.16 deep-scrub ok
Oct 10 09:48:33 compute-1 ceph-mon[79167]: osdmap e97: 3 total, 3 up, 3 in
Oct 10 09:48:33 compute-1 ceph-mon[79167]: pgmap v24: 353 pgs: 353 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:48:33 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 10 09:48:33 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:33 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:33 compute-1 ceph-mon[79167]: Reconfiguring grafana.compute-0 (dependencies changed)...
Oct 10 09:48:33 compute-1 ceph-mon[79167]: Reconfiguring daemon grafana.compute-0 on compute-0
Oct 10 09:48:33 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e98 e98: 3 total, 3 up, 3 in
Oct 10 09:48:33 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 98 pg[10.1d( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=75/75 les/c/f=76/76/0 sis=98) [1] r=0 lpr=98 pi=[75,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:33 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 98 pg[10.d( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=75/75 les/c/f=76/76/0 sis=98) [1] r=0 lpr=98 pi=[75,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:33 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 98 pg[10.b( v 51'1091 (0'0,51'1091] local-lis/les=97/98 n=6 ec=56/45 lis/c=95/65 les/c/f=96/66/0 sis=97) [1] r=0 lpr=97 pi=[65,97)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:48:33 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 98 pg[10.1b( v 51'1091 (0'0,51'1091] local-lis/les=97/98 n=2 ec=56/45 lis/c=95/65 les/c/f=96/66/0 sis=97) [1] r=0 lpr=97 pi=[65,97)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:48:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:33 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004410 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:33 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e99 e99: 3 total, 3 up, 3 in
Oct 10 09:48:33 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 99 pg[10.c( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=6 ec=56/45 lis/c=97/72 les/c/f=98/73/0 sis=99) [1] r=0 lpr=99 pi=[72,99)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:33 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 99 pg[10.d( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=75/75 les/c/f=76/76/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[75,99)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:33 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 99 pg[10.d( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=75/75 les/c/f=76/76/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[75,99)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:33 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 99 pg[10.c( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=6 ec=56/45 lis/c=97/72 les/c/f=98/73/0 sis=99) [1] r=0 lpr=99 pi=[72,99)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:33 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 99 pg[10.1d( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=75/75 les/c/f=76/76/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[75,99)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:33 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 99 pg[10.1d( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=75/75 les/c/f=76/76/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[75,99)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:33 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 99 pg[10.1c( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=97/72 les/c/f=98/73/0 sis=99) [1] r=0 lpr=99 pi=[72,99)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:33 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 99 pg[10.1c( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=97/72 les/c/f=98/73/0 sis=99) [1] r=0 lpr=99 pi=[72,99)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:33 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:33 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:33 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:33 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:33.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:33 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Oct 10 09:48:33 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Oct 10 09:48:33 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:33 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:33 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:33.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:34 compute-1 sshd-session[88141]: Accepted publickey for zuul from 192.168.122.30 port 35846 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:48:34 compute-1 systemd-logind[789]: New session 39 of user zuul.
Oct 10 09:48:34 compute-1 systemd[1]: Started Session 39 of User zuul.
Oct 10 09:48:34 compute-1 sshd-session[88141]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:48:34 compute-1 ceph-mon[79167]: 7.4 scrub starts
Oct 10 09:48:34 compute-1 ceph-mon[79167]: 7.4 scrub ok
Oct 10 09:48:34 compute-1 ceph-mon[79167]: 11.1e scrub starts
Oct 10 09:48:34 compute-1 ceph-mon[79167]: 11.1e scrub ok
Oct 10 09:48:34 compute-1 ceph-mon[79167]: 12.3 scrub starts
Oct 10 09:48:34 compute-1 ceph-mon[79167]: 12.3 scrub ok
Oct 10 09:48:34 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 10 09:48:34 compute-1 ceph-mon[79167]: osdmap e98: 3 total, 3 up, 3 in
Oct 10 09:48:34 compute-1 ceph-mon[79167]: osdmap e99: 3 total, 3 up, 3 in
Oct 10 09:48:34 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e100 e100: 3 total, 3 up, 3 in
Oct 10 09:48:34 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 100 pg[10.c( v 51'1091 (0'0,51'1091] local-lis/les=99/100 n=6 ec=56/45 lis/c=97/72 les/c/f=98/73/0 sis=99) [1] r=0 lpr=99 pi=[72,99)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:48:34 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 100 pg[10.1c( v 51'1091 (0'0,51'1091] local-lis/les=99/100 n=5 ec=56/45 lis/c=97/72 les/c/f=98/73/0 sis=99) [1] r=0 lpr=99 pi=[72,99)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:48:34 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Oct 10 09:48:34 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Oct 10 09:48:34 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:34 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa37c003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:34 compute-1 python3.9[88294]: ansible-ansible.legacy.ping Invoked with data=pong
Oct 10 09:48:35 compute-1 ceph-mon[79167]: 7.3 deep-scrub starts
Oct 10 09:48:35 compute-1 ceph-mon[79167]: 7.3 deep-scrub ok
Oct 10 09:48:35 compute-1 ceph-mon[79167]: 9.11 scrub starts
Oct 10 09:48:35 compute-1 ceph-mon[79167]: 9.11 scrub ok
Oct 10 09:48:35 compute-1 ceph-mon[79167]: 11.17 scrub starts
Oct 10 09:48:35 compute-1 ceph-mon[79167]: 11.17 scrub ok
Oct 10 09:48:35 compute-1 ceph-mon[79167]: pgmap v27: 353 pgs: 2 remapped+peering, 2 peering, 349 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 1 objects/s recovering
Oct 10 09:48:35 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:35 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:35 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Oct 10 09:48:35 compute-1 ceph-mon[79167]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Oct 10 09:48:35 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Oct 10 09:48:35 compute-1 ceph-mon[79167]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Oct 10 09:48:35 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Oct 10 09:48:35 compute-1 ceph-mon[79167]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Oct 10 09:48:35 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:35 compute-1 ceph-mon[79167]: osdmap e100: 3 total, 3 up, 3 in
Oct 10 09:48:35 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:35 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388003680 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:35 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e101 e101: 3 total, 3 up, 3 in
Oct 10 09:48:35 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 101 pg[10.1d( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=99/75 les/c/f=100/76/0 sis=101) [1] r=0 lpr=101 pi=[75,101)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:35 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 101 pg[10.d( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=8 ec=56/45 lis/c=99/75 les/c/f=100/76/0 sis=101) [1] r=0 lpr=101 pi=[75,101)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:35 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 101 pg[10.d( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=8 ec=56/45 lis/c=99/75 les/c/f=100/76/0 sis=101) [1] r=0 lpr=101 pi=[75,101)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:35 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 101 pg[10.1d( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=99/75 les/c/f=100/76/0 sis=101) [1] r=0 lpr=101 pi=[75,101)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:35 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:35 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004410 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:35 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:35 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:35 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:35.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:35 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Oct 10 09:48:35 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Oct 10 09:48:35 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:35 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:35 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:35.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:36 compute-1 ceph-mon[79167]: 12.a scrub starts
Oct 10 09:48:36 compute-1 ceph-mon[79167]: 12.a scrub ok
Oct 10 09:48:36 compute-1 ceph-mon[79167]: 11.1a scrub starts
Oct 10 09:48:36 compute-1 ceph-mon[79167]: 11.1a scrub ok
Oct 10 09:48:36 compute-1 ceph-mon[79167]: 9.13 scrub starts
Oct 10 09:48:36 compute-1 ceph-mon[79167]: 9.13 scrub ok
Oct 10 09:48:36 compute-1 ceph-mon[79167]: osdmap e101: 3 total, 3 up, 3 in
Oct 10 09:48:36 compute-1 ceph-mon[79167]: 9.12 scrub starts
Oct 10 09:48:36 compute-1 ceph-mon[79167]: 9.12 scrub ok
Oct 10 09:48:36 compute-1 python3.9[88469]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:48:36 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e102 e102: 3 total, 3 up, 3 in
Oct 10 09:48:36 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 102 pg[10.d( v 51'1091 (0'0,51'1091] local-lis/les=101/102 n=8 ec=56/45 lis/c=99/75 les/c/f=100/76/0 sis=101) [1] r=0 lpr=101 pi=[75,101)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:48:36 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 102 pg[10.1d( v 51'1091 (0'0,51'1091] local-lis/les=101/102 n=5 ec=56/45 lis/c=99/75 les/c/f=100/76/0 sis=101) [1] r=0 lpr=101 pi=[75,101)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:48:36 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Oct 10 09:48:36 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Oct 10 09:48:36 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:36 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:48:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:37 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa37c003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:37 compute-1 sudo[88624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knvsrscwuqflofoulydlfncgqxyqlvqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089716.8650215-94-102918432553466/AnsiballZ_command.py'
Oct 10 09:48:37 compute-1 sudo[88624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:48:37 compute-1 ceph-mon[79167]: 7.2 scrub starts
Oct 10 09:48:37 compute-1 ceph-mon[79167]: 7.2 scrub ok
Oct 10 09:48:37 compute-1 ceph-mon[79167]: 12.9 deep-scrub starts
Oct 10 09:48:37 compute-1 ceph-mon[79167]: 12.9 deep-scrub ok
Oct 10 09:48:37 compute-1 ceph-mon[79167]: pgmap v30: 353 pgs: 2 remapped+peering, 2 peering, 349 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 1 objects/s recovering
Oct 10 09:48:37 compute-1 ceph-mon[79167]: osdmap e102: 3 total, 3 up, 3 in
Oct 10 09:48:37 compute-1 ceph-mon[79167]: 11.14 scrub starts
Oct 10 09:48:37 compute-1 ceph-mon[79167]: 11.14 scrub ok
Oct 10 09:48:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:37 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388003680 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:37 compute-1 python3.9[88626]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:48:37 compute-1 sudo[88624]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:37 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:37 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:37 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:37.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:37 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 9.a scrub starts
Oct 10 09:48:37 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 9.a scrub ok
Oct 10 09:48:37 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:37 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:37 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:37.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:38 compute-1 sudo[88777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smspahgkiuikavatezrjsxdtvyzfweme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089718.016355-130-118724482689948/AnsiballZ_stat.py'
Oct 10 09:48:38 compute-1 sudo[88777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:48:38 compute-1 ceph-mon[79167]: 7.6 scrub starts
Oct 10 09:48:38 compute-1 ceph-mon[79167]: 7.6 scrub ok
Oct 10 09:48:38 compute-1 ceph-mon[79167]: 11.e scrub starts
Oct 10 09:48:38 compute-1 ceph-mon[79167]: 11.e scrub ok
Oct 10 09:48:38 compute-1 ceph-mon[79167]: 12.8 scrub starts
Oct 10 09:48:38 compute-1 ceph-mon[79167]: 12.8 scrub ok
Oct 10 09:48:38 compute-1 ceph-mon[79167]: 9.a scrub starts
Oct 10 09:48:38 compute-1 ceph-mon[79167]: 9.a scrub ok
Oct 10 09:48:38 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:38 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:38 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:48:38 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:48:38 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 10 09:48:38 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:38 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:38 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 09:48:38 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:48:38 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:48:38 compute-1 python3.9[88779]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:48:38 compute-1 sudo[88777]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:38 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 9.f scrub starts
Oct 10 09:48:38 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 9.f scrub ok
Oct 10 09:48:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:38 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004410 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:39 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e103 e103: 3 total, 3 up, 3 in
Oct 10 09:48:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:39 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:39 compute-1 ceph-mon[79167]: 11.3 scrub starts
Oct 10 09:48:39 compute-1 ceph-mon[79167]: 11.3 scrub ok
Oct 10 09:48:39 compute-1 ceph-mon[79167]: pgmap v32: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 8 B/s, 4 objects/s recovering
Oct 10 09:48:39 compute-1 ceph-mon[79167]: 12.b scrub starts
Oct 10 09:48:39 compute-1 ceph-mon[79167]: 12.b scrub ok
Oct 10 09:48:39 compute-1 ceph-mon[79167]: 9.f scrub starts
Oct 10 09:48:39 compute-1 ceph-mon[79167]: 9.f scrub ok
Oct 10 09:48:39 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 10 09:48:39 compute-1 ceph-mon[79167]: osdmap e103: 3 total, 3 up, 3 in
Oct 10 09:48:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:39 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa37c003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:39 compute-1 sudo[88932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksrofpeezhrxbbfajeuuzfvwmrmsmxvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089719.1351802-163-254313392859775/AnsiballZ_file.py'
Oct 10 09:48:39 compute-1 sudo[88932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:48:39 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:39 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:39 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:39.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:39 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Oct 10 09:48:39 compute-1 python3.9[88934]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:48:39 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Oct 10 09:48:39 compute-1 sudo[88932]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:39 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:39 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:39 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:39.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:40 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e104 e104: 3 total, 3 up, 3 in
Oct 10 09:48:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 104 pg[10.f( v 51'1091 (0'0,51'1091] local-lis/les=82/83 n=6 ec=56/45 lis/c=82/82 les/c/f=83/83/0 sis=104 pruub=10.961517334s) [2] r=-1 lpr=104 pi=[82,104)/1 crt=51'1091 mlcod 0'0 active pruub 236.460647583s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 104 pg[10.f( v 51'1091 (0'0,51'1091] local-lis/les=82/83 n=6 ec=56/45 lis/c=82/82 les/c/f=83/83/0 sis=104 pruub=10.961463928s) [2] r=-1 lpr=104 pi=[82,104)/1 crt=51'1091 mlcod 0'0 unknown NOTIFY pruub 236.460647583s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 104 pg[10.1f( v 51'1091 (0'0,51'1091] local-lis/les=82/83 n=5 ec=56/45 lis/c=82/82 les/c/f=83/83/0 sis=104 pruub=10.959306717s) [2] r=-1 lpr=104 pi=[82,104)/1 crt=51'1091 mlcod 0'0 active pruub 236.460693359s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:40 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 104 pg[10.1f( v 51'1091 (0'0,51'1091] local-lis/les=82/83 n=5 ec=56/45 lis/c=82/82 les/c/f=83/83/0 sis=104 pruub=10.959281921s) [2] r=-1 lpr=104 pi=[82,104)/1 crt=51'1091 mlcod 0'0 unknown NOTIFY pruub 236.460693359s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:40 compute-1 ceph-mon[79167]: 8.d scrub starts
Oct 10 09:48:40 compute-1 ceph-mon[79167]: 8.d scrub ok
Oct 10 09:48:40 compute-1 ceph-mon[79167]: 12.6 scrub starts
Oct 10 09:48:40 compute-1 ceph-mon[79167]: 12.6 scrub ok
Oct 10 09:48:40 compute-1 ceph-mon[79167]: 9.6 scrub starts
Oct 10 09:48:40 compute-1 ceph-mon[79167]: 9.6 scrub ok
Oct 10 09:48:40 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 10 09:48:40 compute-1 python3.9[89084]: ansible-ansible.builtin.service_facts Invoked
Oct 10 09:48:40 compute-1 network[89101]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 10 09:48:40 compute-1 network[89102]: 'network-scripts' will be removed from distribution in near future.
Oct 10 09:48:40 compute-1 network[89103]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 10 09:48:40 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Oct 10 09:48:40 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:40 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388003680 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:40 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Oct 10 09:48:41 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:41 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004410 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:41 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:41 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:41 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e105 e105: 3 total, 3 up, 3 in
Oct 10 09:48:41 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 105 pg[10.1f( v 51'1091 (0'0,51'1091] local-lis/les=82/83 n=5 ec=56/45 lis/c=82/82 les/c/f=83/83/0 sis=105) [2]/[1] r=0 lpr=105 pi=[82,105)/1 crt=51'1091 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:41 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 105 pg[10.f( v 51'1091 (0'0,51'1091] local-lis/les=82/83 n=6 ec=56/45 lis/c=82/82 les/c/f=83/83/0 sis=105) [2]/[1] r=0 lpr=105 pi=[82,105)/1 crt=51'1091 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:41 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 105 pg[10.f( v 51'1091 (0'0,51'1091] local-lis/les=82/83 n=6 ec=56/45 lis/c=82/82 les/c/f=83/83/0 sis=105) [2]/[1] r=0 lpr=105 pi=[82,105)/1 crt=51'1091 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:41 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 105 pg[10.1f( v 51'1091 (0'0,51'1091] local-lis/les=82/83 n=5 ec=56/45 lis/c=82/82 les/c/f=83/83/0 sis=105) [2]/[1] r=0 lpr=105 pi=[82,105)/1 crt=51'1091 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:41 compute-1 ceph-mon[79167]: 8.16 scrub starts
Oct 10 09:48:41 compute-1 ceph-mon[79167]: 8.16 scrub ok
Oct 10 09:48:41 compute-1 ceph-mon[79167]: pgmap v34: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 7 B/s, 3 objects/s recovering
Oct 10 09:48:41 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 10 09:48:41 compute-1 ceph-mon[79167]: osdmap e104: 3 total, 3 up, 3 in
Oct 10 09:48:41 compute-1 ceph-mon[79167]: 12.c scrub starts
Oct 10 09:48:41 compute-1 ceph-mon[79167]: 12.c scrub ok
Oct 10 09:48:41 compute-1 ceph-mon[79167]: 8.1b scrub starts
Oct 10 09:48:41 compute-1 ceph-mon[79167]: 8.1b scrub ok
Oct 10 09:48:41 compute-1 ceph-mon[79167]: osdmap e105: 3 total, 3 up, 3 in
Oct 10 09:48:41 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:41 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:41 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:41.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:41 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Oct 10 09:48:41 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Oct 10 09:48:41 compute-1 systemd[82226]: Starting Mark boot as successful...
Oct 10 09:48:41 compute-1 systemd[82226]: Finished Mark boot as successful.
Oct 10 09:48:41 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:41 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000022s ======
Oct 10 09:48:41 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:41.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Oct 10 09:48:42 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:48:42 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e106 e106: 3 total, 3 up, 3 in
Oct 10 09:48:42 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 106 pg[10.10( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=2 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=106 pruub=8.241539955s) [2] r=-1 lpr=106 pi=[56,106)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 235.768768311s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:42 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 106 pg[10.10( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=2 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=106 pruub=8.240906715s) [2] r=-1 lpr=106 pi=[56,106)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 235.768768311s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:42 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 106 pg[10.1f( v 51'1091 (0'0,51'1091] local-lis/les=105/106 n=5 ec=56/45 lis/c=82/82 les/c/f=83/83/0 sis=105) [2]/[1] async=[2] r=0 lpr=105 pi=[82,105)/1 crt=51'1091 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:48:42 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 106 pg[10.f( v 51'1091 (0'0,51'1091] local-lis/les=105/106 n=6 ec=56/45 lis/c=82/82 les/c/f=83/83/0 sis=105) [2]/[1] async=[2] r=0 lpr=105 pi=[82,105)/1 crt=51'1091 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:48:42 compute-1 ceph-mon[79167]: 12.18 scrub starts
Oct 10 09:48:42 compute-1 ceph-mon[79167]: 12.18 scrub ok
Oct 10 09:48:42 compute-1 ceph-mon[79167]: 7.9 scrub starts
Oct 10 09:48:42 compute-1 ceph-mon[79167]: 7.9 scrub ok
Oct 10 09:48:42 compute-1 ceph-mon[79167]: 8.18 scrub starts
Oct 10 09:48:42 compute-1 ceph-mon[79167]: 8.18 scrub ok
Oct 10 09:48:42 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct 10 09:48:42 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct 10 09:48:42 compute-1 ceph-mon[79167]: osdmap e106: 3 total, 3 up, 3 in
Oct 10 09:48:42 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Oct 10 09:48:42 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Oct 10 09:48:42 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:42 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa37c003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:43 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388003680 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:43 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e107 e107: 3 total, 3 up, 3 in
Oct 10 09:48:43 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 107 pg[10.10( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=2 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=107) [2]/[1] r=0 lpr=107 pi=[56,107)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:43 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 107 pg[10.10( v 51'1091 (0'0,51'1091] local-lis/les=56/57 n=2 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=107) [2]/[1] r=0 lpr=107 pi=[56,107)/1 crt=51'1091 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 09:48:43 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 107 pg[10.1f( v 51'1091 (0'0,51'1091] local-lis/les=105/106 n=5 ec=56/45 lis/c=105/82 les/c/f=106/83/0 sis=107 pruub=15.092753410s) [2] async=[2] r=-1 lpr=107 pi=[82,107)/1 crt=51'1091 mlcod 51'1091 active pruub 243.531875610s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:43 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 107 pg[10.1f( v 51'1091 (0'0,51'1091] local-lis/les=105/106 n=5 ec=56/45 lis/c=105/82 les/c/f=106/83/0 sis=107 pruub=15.092676163s) [2] r=-1 lpr=107 pi=[82,107)/1 crt=51'1091 mlcod 0'0 unknown NOTIFY pruub 243.531875610s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:43 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 107 pg[10.f( v 51'1091 (0'0,51'1091] local-lis/les=105/106 n=6 ec=56/45 lis/c=105/82 les/c/f=106/83/0 sis=107 pruub=15.098297119s) [2] async=[2] r=-1 lpr=107 pi=[82,107)/1 crt=51'1091 mlcod 51'1091 active pruub 243.537689209s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:43 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 107 pg[10.f( v 51'1091 (0'0,51'1091] local-lis/les=105/106 n=6 ec=56/45 lis/c=105/82 les/c/f=106/83/0 sis=107 pruub=15.098007202s) [2] r=-1 lpr=107 pi=[82,107)/1 crt=51'1091 mlcod 0'0 unknown NOTIFY pruub 243.537689209s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:43 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004410 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:43 compute-1 ceph-mon[79167]: 10.4 scrub starts
Oct 10 09:48:43 compute-1 ceph-mon[79167]: 10.4 scrub ok
Oct 10 09:48:43 compute-1 ceph-mon[79167]: pgmap v37: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 7 B/s, 3 objects/s recovering
Oct 10 09:48:43 compute-1 ceph-mon[79167]: 7.e scrub starts
Oct 10 09:48:43 compute-1 ceph-mon[79167]: 7.e scrub ok
Oct 10 09:48:43 compute-1 ceph-mon[79167]: 10.16 scrub starts
Oct 10 09:48:43 compute-1 ceph-mon[79167]: 10.16 scrub ok
Oct 10 09:48:43 compute-1 ceph-mon[79167]: osdmap e107: 3 total, 3 up, 3 in
Oct 10 09:48:43 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:43 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:43 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:43.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:43 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.e scrub starts
Oct 10 09:48:43 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.e scrub ok
Oct 10 09:48:43 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:43 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000022s ======
Oct 10 09:48:43 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:43.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Oct 10 09:48:44 compute-1 sudo[89169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 09:48:44 compute-1 sudo[89169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:44 compute-1 sudo[89169]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:44 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e108 e108: 3 total, 3 up, 3 in
Oct 10 09:48:44 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 108 pg[10.10( v 51'1091 (0'0,51'1091] local-lis/les=107/108 n=2 ec=56/45 lis/c=56/56 les/c/f=57/57/0 sis=107) [2]/[1] async=[2] r=0 lpr=107 pi=[56,107)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:48:44 compute-1 ceph-mon[79167]: 10.13 scrub starts
Oct 10 09:48:44 compute-1 ceph-mon[79167]: 10.13 scrub ok
Oct 10 09:48:44 compute-1 ceph-mon[79167]: 7.1e deep-scrub starts
Oct 10 09:48:44 compute-1 ceph-mon[79167]: 7.1e deep-scrub ok
Oct 10 09:48:44 compute-1 ceph-mon[79167]: 10.e scrub starts
Oct 10 09:48:44 compute-1 ceph-mon[79167]: 10.e scrub ok
Oct 10 09:48:44 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:44 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:48:44 compute-1 ceph-mon[79167]: osdmap e108: 3 total, 3 up, 3 in
Oct 10 09:48:44 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.c scrub starts
Oct 10 09:48:44 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.c scrub ok
Oct 10 09:48:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:44 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004410 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:45 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:45 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e109 e109: 3 total, 3 up, 3 in
Oct 10 09:48:45 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 109 pg[10.10( v 51'1091 (0'0,51'1091] local-lis/les=107/108 n=2 ec=56/45 lis/c=107/56 les/c/f=108/57/0 sis=109 pruub=15.000724792s) [2] async=[2] r=-1 lpr=109 pi=[56,109)/1 crt=51'1091 lcod 0'0 mlcod 0'0 active pruub 245.454376221s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:48:45 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 109 pg[10.10( v 51'1091 (0'0,51'1091] local-lis/les=107/108 n=2 ec=56/45 lis/c=107/56 les/c/f=108/57/0 sis=109 pruub=15.000662804s) [2] r=-1 lpr=109 pi=[56,109)/1 crt=51'1091 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 245.454376221s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:48:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:45 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:45 compute-1 ceph-mon[79167]: 10.14 scrub starts
Oct 10 09:48:45 compute-1 ceph-mon[79167]: 10.14 scrub ok
Oct 10 09:48:45 compute-1 ceph-mon[79167]: pgmap v40: 353 pgs: 2 remapped+peering, 351 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:48:45 compute-1 ceph-mon[79167]: 12.10 scrub starts
Oct 10 09:48:45 compute-1 ceph-mon[79167]: 12.10 scrub ok
Oct 10 09:48:45 compute-1 ceph-mon[79167]: 10.11 scrub starts
Oct 10 09:48:45 compute-1 ceph-mon[79167]: 10.c scrub starts
Oct 10 09:48:45 compute-1 ceph-mon[79167]: 10.c scrub ok
Oct 10 09:48:45 compute-1 ceph-mon[79167]: osdmap e109: 3 total, 3 up, 3 in
Oct 10 09:48:45 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:45 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:48:45 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:45.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:48:45 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.a deep-scrub starts
Oct 10 09:48:45 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.a deep-scrub ok
Oct 10 09:48:45 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:45 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:45 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:45.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:46 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e110 e110: 3 total, 3 up, 3 in
Oct 10 09:48:46 compute-1 python3.9[89395]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:48:46 compute-1 ceph-mon[79167]: 10.11 scrub ok
Oct 10 09:48:46 compute-1 ceph-mon[79167]: 12.e scrub starts
Oct 10 09:48:46 compute-1 ceph-mon[79167]: 12.e scrub ok
Oct 10 09:48:46 compute-1 ceph-mon[79167]: 10.3 scrub starts
Oct 10 09:48:46 compute-1 ceph-mon[79167]: 10.a deep-scrub starts
Oct 10 09:48:46 compute-1 ceph-mon[79167]: 10.a deep-scrub ok
Oct 10 09:48:46 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:48:46 compute-1 ceph-mon[79167]: osdmap e110: 3 total, 3 up, 3 in
Oct 10 09:48:46 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Oct 10 09:48:46 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:46 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa37c003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:46 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Oct 10 09:48:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:48:47 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:47 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004410 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:47 compute-1 python3.9[89545]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:48:47 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:47 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388003680 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:47 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:47 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:47 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:47.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:47 compute-1 ceph-mon[79167]: 10.3 scrub ok
Oct 10 09:48:47 compute-1 ceph-mon[79167]: pgmap v43: 353 pgs: 2 remapped+peering, 351 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:48:47 compute-1 ceph-mon[79167]: 7.1b scrub starts
Oct 10 09:48:47 compute-1 ceph-mon[79167]: 7.1b scrub ok
Oct 10 09:48:47 compute-1 ceph-mon[79167]: 10.f scrub starts
Oct 10 09:48:47 compute-1 ceph-mon[79167]: 10.f scrub ok
Oct 10 09:48:47 compute-1 ceph-mon[79167]: 10.9 scrub starts
Oct 10 09:48:47 compute-1 ceph-mon[79167]: 10.9 scrub ok
Oct 10 09:48:47 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.b scrub starts
Oct 10 09:48:47 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.b scrub ok
Oct 10 09:48:47 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:47 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:47 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:47.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:48 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e111 e111: 3 total, 3 up, 3 in
Oct 10 09:48:48 compute-1 ceph-mon[79167]: 12.12 scrub starts
Oct 10 09:48:48 compute-1 ceph-mon[79167]: 12.12 scrub ok
Oct 10 09:48:48 compute-1 ceph-mon[79167]: 10.b scrub starts
Oct 10 09:48:48 compute-1 ceph-mon[79167]: 10.b scrub ok
Oct 10 09:48:48 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct 10 09:48:48 compute-1 python3.9[89700]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:48:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:48 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:48 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Oct 10 09:48:48 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Oct 10 09:48:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:49 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa37c003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:49 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004430 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:49 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:49 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:48:49 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:49.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:48:49 compute-1 ceph-mon[79167]: pgmap v45: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:48:49 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct 10 09:48:49 compute-1 ceph-mon[79167]: osdmap e111: 3 total, 3 up, 3 in
Oct 10 09:48:49 compute-1 ceph-mon[79167]: 7.10 scrub starts
Oct 10 09:48:49 compute-1 ceph-mon[79167]: 7.10 scrub ok
Oct 10 09:48:49 compute-1 ceph-mon[79167]: 10.6 scrub starts
Oct 10 09:48:49 compute-1 ceph-mon[79167]: 10.6 scrub ok
Oct 10 09:48:49 compute-1 sudo[89857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbfercteteqyylabliuhegutgzffhqcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089729.4052436-307-207573286841591/AnsiballZ_setup.py'
Oct 10 09:48:49 compute-1 sudo[89857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:48:49 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Oct 10 09:48:49 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Oct 10 09:48:49 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:49 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:49 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:49.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:50 compute-1 python3.9[89859]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 09:48:50 compute-1 sudo[89860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:48:50 compute-1 sudo[89860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:48:50 compute-1 sudo[89860]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:50 compute-1 sudo[89857]: pam_unix(sudo:session): session closed for user root
Oct 10 09:48:50 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e112 e112: 3 total, 3 up, 3 in
Oct 10 09:48:50 compute-1 ceph-mon[79167]: 12.19 scrub starts
Oct 10 09:48:50 compute-1 ceph-mon[79167]: 12.19 scrub ok
Oct 10 09:48:50 compute-1 ceph-mon[79167]: 10.19 scrub starts
Oct 10 09:48:50 compute-1 ceph-mon[79167]: 10.19 scrub ok
Oct 10 09:48:50 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct 10 09:48:50 compute-1 sudo[89966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-madfkgtnligiysjbfkmbwhdhjiygfykt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089729.4052436-307-207573286841591/AnsiballZ_dnf.py'
Oct 10 09:48:50 compute-1 sudo[89966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:48:50 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Oct 10 09:48:50 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:50 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388003680 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:50 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Oct 10 09:48:51 compute-1 python3.9[89968]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 09:48:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:51 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:51 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:51 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:51 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:48:51 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:51.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:48:51 compute-1 ceph-mon[79167]: pgmap v47: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:48:51 compute-1 ceph-mon[79167]: 7.18 scrub starts
Oct 10 09:48:51 compute-1 ceph-mon[79167]: 7.18 scrub ok
Oct 10 09:48:51 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct 10 09:48:51 compute-1 ceph-mon[79167]: osdmap e112: 3 total, 3 up, 3 in
Oct 10 09:48:51 compute-1 ceph-mon[79167]: 10.1a scrub starts
Oct 10 09:48:51 compute-1 ceph-mon[79167]: 10.1a scrub ok
Oct 10 09:48:51 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e113 e113: 3 total, 3 up, 3 in
Oct 10 09:48:51 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Oct 10 09:48:51 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Oct 10 09:48:51 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:51 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:51 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:51.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:48:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e114 e114: 3 total, 3 up, 3 in
Oct 10 09:48:52 compute-1 ceph-mon[79167]: 10.2 deep-scrub starts
Oct 10 09:48:52 compute-1 ceph-mon[79167]: 10.2 deep-scrub ok
Oct 10 09:48:52 compute-1 ceph-mon[79167]: osdmap e113: 3 total, 3 up, 3 in
Oct 10 09:48:52 compute-1 ceph-mon[79167]: 10.1c scrub starts
Oct 10 09:48:52 compute-1 ceph-mon[79167]: 10.1c scrub ok
Oct 10 09:48:52 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct 10 09:48:52 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:52 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:52 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Oct 10 09:48:52 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Oct 10 09:48:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:53 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388003680 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:53 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e115 e115: 3 total, 3 up, 3 in
Oct 10 09:48:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:53 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388003680 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:53 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:53 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:53 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:53.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:53 compute-1 ceph-mon[79167]: pgmap v50: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:48:53 compute-1 ceph-mon[79167]: 10.5 deep-scrub starts
Oct 10 09:48:53 compute-1 ceph-mon[79167]: 10.5 deep-scrub ok
Oct 10 09:48:53 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct 10 09:48:53 compute-1 ceph-mon[79167]: osdmap e114: 3 total, 3 up, 3 in
Oct 10 09:48:53 compute-1 ceph-mon[79167]: 10.1d scrub starts
Oct 10 09:48:53 compute-1 ceph-mon[79167]: 10.1d scrub ok
Oct 10 09:48:53 compute-1 ceph-mon[79167]: osdmap e115: 3 total, 3 up, 3 in
Oct 10 09:48:53 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Oct 10 09:48:53 compute-1 ceph-osd[76867]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Oct 10 09:48:53 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:53 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:53 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:53.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:54 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e116 e116: 3 total, 3 up, 3 in
Oct 10 09:48:54 compute-1 ceph-mon[79167]: 10.1e scrub starts
Oct 10 09:48:54 compute-1 ceph-mon[79167]: 10.1e scrub ok
Oct 10 09:48:54 compute-1 ceph-mon[79167]: osdmap e116: 3 total, 3 up, 3 in
Oct 10 09:48:54 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:54 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a00044e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:55 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a00044e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:55 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e117 e117: 3 total, 3 up, 3 in
Oct 10 09:48:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:55 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378000b60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:55 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:55 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:55 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:55.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:55 compute-1 ceph-mon[79167]: pgmap v53: 353 pgs: 1 remapped+peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:48:55 compute-1 ceph-mon[79167]: osdmap e117: 3 total, 3 up, 3 in
Oct 10 09:48:56 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:56 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:48:56 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:55.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:48:56 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e118 e118: 3 total, 3 up, 3 in
Oct 10 09:48:56 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:56 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3900014d0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 10 09:48:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:57 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a00044e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:57 compute-1 ceph-mon[79167]: pgmap v56: 353 pgs: 1 remapped+peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:48:57 compute-1 ceph-mon[79167]: osdmap e118: 3 total, 3 up, 3 in
Oct 10 09:48:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:57 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a00044e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:57 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:57 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:48:57 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:57.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:48:58 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:58 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:48:58 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:48:58.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:48:58 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct 10 09:48:58 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e119 e119: 3 total, 3 up, 3 in
Oct 10 09:48:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:58 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3780016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:59 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:59 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3900014d0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:59 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:48:59 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a00044e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:48:59 compute-1 ceph-mon[79167]: pgmap v58: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:48:59 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct 10 09:48:59 compute-1 ceph-mon[79167]: osdmap e119: 3 total, 3 up, 3 in
Oct 10 09:48:59 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e120 e120: 3 total, 3 up, 3 in
Oct 10 09:48:59 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:48:59 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:48:59 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:48:59.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:49:00 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:00 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:00 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:00.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:00 compute-1 ceph-mon[79167]: osdmap e120: 3 total, 3 up, 3 in
Oct 10 09:49:00 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Oct 10 09:49:00 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e121 e121: 3 total, 3 up, 3 in
Oct 10 09:49:00 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:00 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:01 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:01 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3780016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:01 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:01 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3900014d0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:01 compute-1 ceph-mon[79167]: pgmap v61: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:49:01 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Oct 10 09:49:01 compute-1 ceph-mon[79167]: osdmap e121: 3 total, 3 up, 3 in
Oct 10 09:49:01 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:49:01 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e122 e122: 3 total, 3 up, 3 in
Oct 10 09:49:01 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:01 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:01 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:01.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:02 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:02 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 09:49:02 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:02.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 09:49:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:49:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e123 e123: 3 total, 3 up, 3 in
Oct 10 09:49:02 compute-1 ceph-mon[79167]: osdmap e122: 3 total, 3 up, 3 in
Oct 10 09:49:02 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct 10 09:49:02 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:02 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a00044e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:03 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:03 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3780016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:03 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:03 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:03 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:03.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:03 compute-1 ceph-mon[79167]: pgmap v64: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:49:03 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct 10 09:49:03 compute-1 ceph-mon[79167]: osdmap e123: 3 total, 3 up, 3 in
Oct 10 09:49:04 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:04 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:49:04 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:04.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:49:04 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:04 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3900014d0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:05 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:05 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a00044e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:05 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:05 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:05 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:05 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:05 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:05.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:05 compute-1 ceph-mon[79167]: pgmap v66: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Oct 10 09:49:06 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:06 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:49:06 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:06.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:49:06 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:06 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:49:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:07 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390001670 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:07 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a00044e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:07 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:07 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:07 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:07.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:07 compute-1 ceph-mon[79167]: pgmap v67: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct 10 09:49:08 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:08 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:08 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:08.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:08 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Oct 10 09:49:08 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e124 e124: 3 total, 3 up, 3 in
Oct 10 09:49:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:08 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378002b10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:09 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:09 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:09 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:09.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:09 compute-1 ceph-mon[79167]: pgmap v68: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 403 B/s rd, 0 op/s; 0 B/s, 0 objects/s recovering
Oct 10 09:49:09 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Oct 10 09:49:09 compute-1 ceph-mon[79167]: osdmap e124: 3 total, 3 up, 3 in
Oct 10 09:49:10 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:10 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:10 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:10.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:10 compute-1 sudo[90098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:49:10 compute-1 sudo[90098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:49:10 compute-1 sudo[90098]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:10 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Oct 10 09:49:10 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e125 e125: 3 total, 3 up, 3 in
Oct 10 09:49:10 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:10 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390003b50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:11 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004500 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:11 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388004390 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:11 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:11 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:11 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:11.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:11 compute-1 ceph-mon[79167]: pgmap v70: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s; 0 B/s, 0 objects/s recovering
Oct 10 09:49:11 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Oct 10 09:49:11 compute-1 ceph-mon[79167]: osdmap e125: 3 total, 3 up, 3 in
Oct 10 09:49:12 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:12 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:12 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:12.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:49:12 compute-1 ceph-mon[79167]: pgmap v72: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Oct 10 09:49:12 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Oct 10 09:49:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e126 e126: 3 total, 3 up, 3 in
Oct 10 09:49:12 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 126 pg[10.19( v 51'1091 (0'0,51'1091] local-lis/les=90/91 n=7 ec=56/45 lis/c=90/90 les/c/f=91/91/0 sis=126 pruub=15.568570137s) [0] r=-1 lpr=126 pi=[90,126)/1 crt=51'1091 mlcod 0'0 active pruub 273.443084717s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:49:12 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 126 pg[10.19( v 51'1091 (0'0,51'1091] local-lis/les=90/91 n=7 ec=56/45 lis/c=90/90 les/c/f=91/91/0 sis=126 pruub=15.568256378s) [0] r=-1 lpr=126 pi=[90,126)/1 crt=51'1091 mlcod 0'0 unknown NOTIFY pruub 273.443084717s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:49:12 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:12 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:13 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390003b50 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:13 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e127 e127: 3 total, 3 up, 3 in
Oct 10 09:49:13 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 127 pg[10.19( v 51'1091 (0'0,51'1091] local-lis/les=90/91 n=7 ec=56/45 lis/c=90/90 les/c/f=91/91/0 sis=127) [0]/[1] r=0 lpr=127 pi=[90,127)/1 crt=51'1091 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:49:13 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 127 pg[10.19( v 51'1091 (0'0,51'1091] local-lis/les=90/91 n=7 ec=56/45 lis/c=90/90 les/c/f=91/91/0 sis=127) [0]/[1] r=0 lpr=127 pi=[90,127)/1 crt=51'1091 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 09:49:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:13 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004520 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:13 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:13 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:13 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:13.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:13 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Oct 10 09:49:13 compute-1 ceph-mon[79167]: osdmap e126: 3 total, 3 up, 3 in
Oct 10 09:49:13 compute-1 ceph-mon[79167]: osdmap e127: 3 total, 3 up, 3 in
Oct 10 09:49:14 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:14 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:14 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:14.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:14 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e128 e128: 3 total, 3 up, 3 in
Oct 10 09:49:14 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 128 pg[10.19( v 51'1091 (0'0,51'1091] local-lis/les=127/128 n=7 ec=56/45 lis/c=90/90 les/c/f=91/91/0 sis=127) [0]/[1] async=[0] r=0 lpr=127 pi=[90,127)/1 crt=51'1091 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:49:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:14 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388004390 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:14 compute-1 ceph-mon[79167]: pgmap v75: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:49:14 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Oct 10 09:49:14 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Oct 10 09:49:14 compute-1 ceph-mon[79167]: osdmap e128: 3 total, 3 up, 3 in
Oct 10 09:49:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:15 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:15 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e129 e129: 3 total, 3 up, 3 in
Oct 10 09:49:15 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 129 pg[10.19( v 51'1091 (0'0,51'1091] local-lis/les=127/128 n=7 ec=56/45 lis/c=127/90 les/c/f=128/91/0 sis=129 pruub=15.001054764s) [0] async=[0] r=-1 lpr=129 pi=[90,129)/1 crt=51'1091 mlcod 51'1091 active pruub 275.469635010s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:49:15 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 129 pg[10.19( v 51'1091 (0'0,51'1091] local-lis/les=127/128 n=7 ec=56/45 lis/c=127/90 les/c/f=128/91/0 sis=129 pruub=15.000988007s) [0] r=-1 lpr=129 pi=[90,129)/1 crt=51'1091 mlcod 0'0 unknown NOTIFY pruub 275.469635010s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:49:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:15 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390004470 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:15 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:15 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:15 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:15.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:16 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:16 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:16 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:16.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:16 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/094916 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 09:49:16 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e130 e130: 3 total, 3 up, 3 in
Oct 10 09:49:16 compute-1 ceph-mon[79167]: osdmap e129: 3 total, 3 up, 3 in
Oct 10 09:49:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Oct 10 09:49:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:49:16 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 130 pg[10.1b( v 51'1091 (0'0,51'1091] local-lis/les=97/98 n=2 ec=56/45 lis/c=97/97 les/c/f=98/98/0 sis=130 pruub=12.303937912s) [0] r=-1 lpr=130 pi=[97,130)/1 crt=51'1091 mlcod 0'0 active pruub 274.105194092s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:49:16 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 130 pg[10.1b( v 51'1091 (0'0,51'1091] local-lis/les=97/98 n=2 ec=56/45 lis/c=97/97 les/c/f=98/98/0 sis=130 pruub=12.303892136s) [0] r=-1 lpr=130 pi=[97,130)/1 crt=51'1091 mlcod 0'0 unknown NOTIFY pruub 274.105194092s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:49:16 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:16 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004540 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:49:17 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:17 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388004390 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:17 compute-1 ceph-mon[79167]: pgmap v78: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:49:17 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Oct 10 09:49:17 compute-1 ceph-mon[79167]: osdmap e130: 3 total, 3 up, 3 in
Oct 10 09:49:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e131 e131: 3 total, 3 up, 3 in
Oct 10 09:49:17 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 131 pg[10.1b( v 51'1091 (0'0,51'1091] local-lis/les=97/98 n=2 ec=56/45 lis/c=97/97 les/c/f=98/98/0 sis=131) [0]/[1] r=0 lpr=131 pi=[97,131)/1 crt=51'1091 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:49:17 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 131 pg[10.1b( v 51'1091 (0'0,51'1091] local-lis/les=97/98 n=2 ec=56/45 lis/c=97/97 les/c/f=98/98/0 sis=131) [0]/[1] r=0 lpr=131 pi=[97,131)/1 crt=51'1091 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 09:49:17 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:17 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:17 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:17 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:17 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:17.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:18 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:18 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:18 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:18.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:18 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e132 e132: 3 total, 3 up, 3 in
Oct 10 09:49:18 compute-1 ceph-mon[79167]: osdmap e131: 3 total, 3 up, 3 in
Oct 10 09:49:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:18 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390004470 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:19 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004560 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:19 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 132 pg[10.1b( v 51'1091 (0'0,51'1091] local-lis/les=131/132 n=2 ec=56/45 lis/c=97/97 les/c/f=98/98/0 sis=131) [0]/[1] async=[0] r=0 lpr=131 pi=[97,131)/1 crt=51'1091 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:49:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:19 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388004390 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:19 compute-1 ceph-mon[79167]: pgmap v81: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 0 B/s, 1 objects/s recovering
Oct 10 09:49:19 compute-1 ceph-mon[79167]: osdmap e132: 3 total, 3 up, 3 in
Oct 10 09:49:19 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e133 e133: 3 total, 3 up, 3 in
Oct 10 09:49:19 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 133 pg[10.1b( v 51'1091 (0'0,51'1091] local-lis/les=131/132 n=2 ec=56/45 lis/c=131/97 les/c/f=132/98/0 sis=133 pruub=15.850214958s) [0] async=[0] r=-1 lpr=133 pi=[97,133)/1 crt=51'1091 mlcod 51'1091 active pruub 280.424163818s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:49:19 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 133 pg[10.1b( v 51'1091 (0'0,51'1091] local-lis/les=131/132 n=2 ec=56/45 lis/c=131/97 les/c/f=132/98/0 sis=133 pruub=15.850142479s) [0] r=-1 lpr=133 pi=[97,133)/1 crt=51'1091 mlcod 0'0 unknown NOTIFY pruub 280.424163818s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:49:19 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:19 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:19 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:19.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:20 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:20 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:20 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:20.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:20 compute-1 ceph-mon[79167]: osdmap e133: 3 total, 3 up, 3 in
Oct 10 09:49:20 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e134 e134: 3 total, 3 up, 3 in
Oct 10 09:49:20 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:20 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:21 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390004470 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:21 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004580 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/094921 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 09:49:21 compute-1 ceph-mon[79167]: pgmap v84: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 0 B/s, 1 objects/s recovering
Oct 10 09:49:21 compute-1 ceph-mon[79167]: osdmap e134: 3 total, 3 up, 3 in
Oct 10 09:49:21 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:21 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:49:21 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:21.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:49:22 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:22 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:22 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:22.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:22 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:49:22 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:22 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004580 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:23 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378003c10 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:23 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390004470 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:23 compute-1 ceph-mon[79167]: pgmap v86: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct 10 09:49:23 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:23 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:23 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:23.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:24 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:24 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:24 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:24.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:24 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Oct 10 09:49:24 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e135 e135: 3 total, 3 up, 3 in
Oct 10 09:49:24 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:24 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004580 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:25 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:25 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004580 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:25 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:25 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 09:49:25 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:25 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:25 compute-1 ceph-mon[79167]: pgmap v87: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 0 objects/s recovering
Oct 10 09:49:25 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Oct 10 09:49:25 compute-1 ceph-mon[79167]: osdmap e135: 3 total, 3 up, 3 in
Oct 10 09:49:25 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:25 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:25 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:25.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:26 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:26 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 09:49:26 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:26.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 09:49:26 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Oct 10 09:49:26 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e136 e136: 3 total, 3 up, 3 in
Oct 10 09:49:26 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:26 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390004470 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:27 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:49:27 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:27 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:27 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:27 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:27 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:27 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:27 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:27.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:27 compute-1 ceph-mon[79167]: pgmap v89: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 33 B/s, 0 objects/s recovering
Oct 10 09:49:27 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Oct 10 09:49:27 compute-1 ceph-mon[79167]: osdmap e136: 3 total, 3 up, 3 in
Oct 10 09:49:28 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:28 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 09:49:28 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:28.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 09:49:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:28 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 09:49:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:28 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 09:49:28 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e137 e137: 3 total, 3 up, 3 in
Oct 10 09:49:28 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 137 pg[10.1e( v 51'1091 (0'0,51'1091] local-lis/les=80/81 n=5 ec=56/45 lis/c=80/80 les/c/f=81/81/0 sis=137 pruub=8.719421387s) [2] r=-1 lpr=137 pi=[80,137)/1 crt=51'1091 mlcod 0'0 active pruub 282.430541992s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:49:28 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 137 pg[10.1e( v 51'1091 (0'0,51'1091] local-lis/les=80/81 n=5 ec=56/45 lis/c=80/80 les/c/f=81/81/0 sis=137 pruub=8.719359398s) [2] r=-1 lpr=137 pi=[80,137)/1 crt=51'1091 mlcod 0'0 unknown NOTIFY pruub 282.430541992s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:49:28 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Oct 10 09:49:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:28 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:29 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390004470 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:29 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:29 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:29 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:29 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:29.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:29 compute-1 ceph-mon[79167]: pgmap v91: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 674 B/s wr, 1 op/s; 28 B/s, 0 objects/s recovering
Oct 10 09:49:29 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Oct 10 09:49:29 compute-1 ceph-mon[79167]: osdmap e137: 3 total, 3 up, 3 in
Oct 10 09:49:29 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e138 e138: 3 total, 3 up, 3 in
Oct 10 09:49:29 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 138 pg[10.1e( v 51'1091 (0'0,51'1091] local-lis/les=80/81 n=5 ec=56/45 lis/c=80/80 les/c/f=81/81/0 sis=138) [2]/[1] r=0 lpr=138 pi=[80,138)/1 crt=51'1091 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:49:29 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 138 pg[10.1e( v 51'1091 (0'0,51'1091] local-lis/les=80/81 n=5 ec=56/45 lis/c=80/80 les/c/f=81/81/0 sis=138) [2]/[1] r=0 lpr=138 pi=[80,138)/1 crt=51'1091 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 10 09:49:30 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:30 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 09:49:30 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:30.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 09:49:30 compute-1 sudo[90162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:49:30 compute-1 sudo[90162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:49:30 compute-1 sudo[90162]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:30 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e139 e139: 3 total, 3 up, 3 in
Oct 10 09:49:30 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 139 pg[10.1f( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=107/107 les/c/f=108/108/0 sis=139) [1] r=0 lpr=139 pi=[107,139)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:49:30 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 139 pg[10.1e( v 51'1091 (0'0,51'1091] local-lis/les=138/139 n=5 ec=56/45 lis/c=80/80 les/c/f=81/81/0 sis=138) [2]/[1] async=[2] r=0 lpr=138 pi=[80,138)/1 crt=51'1091 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:49:30 compute-1 ceph-mon[79167]: osdmap e138: 3 total, 3 up, 3 in
Oct 10 09:49:30 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 10 09:49:30 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:30 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:31 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:31 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390004470 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:31 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 09:49:31 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:31 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:31 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:31.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:31 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e140 e140: 3 total, 3 up, 3 in
Oct 10 09:49:31 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 140 pg[10.1e( v 51'1091 (0'0,51'1091] local-lis/les=138/139 n=5 ec=56/45 lis/c=138/80 les/c/f=139/81/0 sis=140 pruub=15.002747536s) [2] async=[2] r=-1 lpr=140 pi=[80,140)/1 crt=51'1091 mlcod 51'1091 active pruub 291.782257080s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:49:31 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 140 pg[10.1e( v 51'1091 (0'0,51'1091] local-lis/les=138/139 n=5 ec=56/45 lis/c=138/80 les/c/f=139/81/0 sis=140 pruub=15.002635002s) [2] r=-1 lpr=140 pi=[80,140)/1 crt=51'1091 mlcod 0'0 unknown NOTIFY pruub 291.782257080s@ mbc={}] state<Start>: transitioning to Stray
Oct 10 09:49:31 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 140 pg[10.1f( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=107/107 les/c/f=108/108/0 sis=140) [1]/[2] r=-1 lpr=140 pi=[107,140)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:49:31 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 140 pg[10.1f( empty local-lis/les=0/0 n=0 ec=56/45 lis/c=107/107 les/c/f=108/108/0 sis=140) [1]/[2] r=-1 lpr=140 pi=[107,140)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 10 09:49:31 compute-1 ceph-mon[79167]: pgmap v94: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 925 B/s wr, 2 op/s
Oct 10 09:49:31 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 10 09:49:31 compute-1 ceph-mon[79167]: osdmap e139: 3 total, 3 up, 3 in
Oct 10 09:49:31 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:49:31 compute-1 ceph-mon[79167]: osdmap e140: 3 total, 3 up, 3 in
Oct 10 09:49:32 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:32 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:32 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:32.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:32 compute-1 sudo[89966]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:32 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:49:32 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e141 e141: 3 total, 3 up, 3 in
Oct 10 09:49:32 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:32 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:33 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:33 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e142 e142: 3 total, 3 up, 3 in
Oct 10 09:49:33 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 142 pg[10.1f( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=140/107 les/c/f=141/108/0 sis=142) [1] r=0 lpr=142 pi=[107,142)/1 luod=0'0 crt=51'1091 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 10 09:49:33 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 142 pg[10.1f( v 51'1091 (0'0,51'1091] local-lis/les=0/0 n=5 ec=56/45 lis/c=140/107 les/c/f=141/108/0 sis=142) [1] r=0 lpr=142 pi=[107,142)/1 crt=51'1091 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 10 09:49:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:33 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:33 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:33 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:33 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:33.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:33 compute-1 ceph-mon[79167]: pgmap v97: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.7 KiB/s wr, 5 op/s
Oct 10 09:49:33 compute-1 ceph-mon[79167]: osdmap e141: 3 total, 3 up, 3 in
Oct 10 09:49:33 compute-1 ceph-mon[79167]: osdmap e142: 3 total, 3 up, 3 in
Oct 10 09:49:34 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:34 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:34 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:34.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:34 compute-1 sudo[90338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmgwknbedotyekpmsffoedkzjhofrrlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089773.7814136-343-174845765641027/AnsiballZ_command.py'
Oct 10 09:49:34 compute-1 sudo[90338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:49:34 compute-1 python3.9[90340]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:49:34 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 e143: 3 total, 3 up, 3 in
Oct 10 09:49:34 compute-1 ceph-osd[76867]: osd.1 pg_epoch: 143 pg[10.1f( v 51'1091 (0'0,51'1091] local-lis/les=142/143 n=5 ec=56/45 lis/c=140/107 les/c/f=141/108/0 sis=142) [1] r=0 lpr=142 pi=[107,142)/1 crt=51'1091 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 10 09:49:34 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:34 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 09:49:34 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:34 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:35 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:35 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:35 compute-1 sudo[90338]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:35 compute-1 ceph-mon[79167]: pgmap v100: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.7 KiB/s wr, 5 op/s; 0 B/s, 1 objects/s recovering
Oct 10 09:49:35 compute-1 ceph-mon[79167]: osdmap e143: 3 total, 3 up, 3 in
Oct 10 09:49:35 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Oct 10 09:49:35 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:49:35.527240) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 09:49:35 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Oct 10 09:49:35 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089775527275, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 3142, "num_deletes": 252, "total_data_size": 9637706, "memory_usage": 9778848, "flush_reason": "Manual Compaction"}
Oct 10 09:49:35 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Oct 10 09:49:35 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089775573932, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 6118759, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7558, "largest_seqno": 10695, "table_properties": {"data_size": 6104678, "index_size": 9103, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3909, "raw_key_size": 34854, "raw_average_key_size": 22, "raw_value_size": 6074200, "raw_average_value_size": 3949, "num_data_blocks": 395, "num_entries": 1538, "num_filter_entries": 1538, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089675, "oldest_key_time": 1760089675, "file_creation_time": 1760089775, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Oct 10 09:49:35 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 46767 microseconds, and 21081 cpu microseconds.
Oct 10 09:49:35 compute-1 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 09:49:35 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:49:35.574000) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 6118759 bytes OK
Oct 10 09:49:35 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:49:35.574027) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Oct 10 09:49:35 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:49:35.575524) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Oct 10 09:49:35 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:49:35.575545) EVENT_LOG_v1 {"time_micros": 1760089775575538, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 09:49:35 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:49:35.575569) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 09:49:35 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 9622516, prev total WAL file size 9622516, number of live WAL files 2.
Oct 10 09:49:35 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 09:49:35 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:49:35.578574) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Oct 10 09:49:35 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 09:49:35 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(5975KB)], [18(10MB)]
Oct 10 09:49:35 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089775578616, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 17453906, "oldest_snapshot_seqno": -1}
Oct 10 09:49:35 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:35 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388004390 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:35 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 4078 keys, 13426087 bytes, temperature: kUnknown
Oct 10 09:49:35 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089775673662, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 13426087, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13393583, "index_size": 21194, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10245, "raw_key_size": 104167, "raw_average_key_size": 25, "raw_value_size": 13313676, "raw_average_value_size": 3264, "num_data_blocks": 912, "num_entries": 4078, "num_filter_entries": 4078, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089522, "oldest_key_time": 0, "file_creation_time": 1760089775, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Oct 10 09:49:35 compute-1 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 09:49:35 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:49:35.673998) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 13426087 bytes
Oct 10 09:49:35 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:49:35.675294) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 183.4 rd, 141.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(5.8, 10.8 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(5.0) write-amplify(2.2) OK, records in: 4616, records dropped: 538 output_compression: NoCompression
Oct 10 09:49:35 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:49:35.675316) EVENT_LOG_v1 {"time_micros": 1760089775675304, "job": 8, "event": "compaction_finished", "compaction_time_micros": 95148, "compaction_time_cpu_micros": 46203, "output_level": 6, "num_output_files": 1, "total_output_size": 13426087, "num_input_records": 4616, "num_output_records": 4078, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 09:49:35 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 09:49:35 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089775676450, "job": 8, "event": "table_file_deletion", "file_number": 20}
Oct 10 09:49:35 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 09:49:35 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089775678604, "job": 8, "event": "table_file_deletion", "file_number": 18}
Oct 10 09:49:35 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:49:35.578527) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:49:35 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:49:35.678635) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:49:35 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:49:35.678640) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:49:35 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:49:35.678642) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:49:35 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:49:35.678643) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:49:35 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:49:35.678646) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:49:35 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:35 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:49:35 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:35.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:49:36 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:36 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:36 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:36.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:36 compute-1 sudo[90626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vubhfvsliwptjgyfzhhmfaxkrmhtqigl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089775.5824983-367-192248739922808/AnsiballZ_selinux.py'
Oct 10 09:49:36 compute-1 sudo[90626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:49:36 compute-1 python3.9[90628]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct 10 09:49:36 compute-1 sudo[90626]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:36 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:36 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:49:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:37 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390004470 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:37 compute-1 sudo[90779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjdlxdacjwkpwspwsjysizdmrrugifgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089777.016112-400-164439756154983/AnsiballZ_command.py'
Oct 10 09:49:37 compute-1 sudo[90779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:49:37 compute-1 ceph-mon[79167]: pgmap v102: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.6 KiB/s wr, 5 op/s; 0 B/s, 1 objects/s recovering
Oct 10 09:49:37 compute-1 python3.9[90781]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct 10 09:49:37 compute-1 sudo[90779]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:37 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:37 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 09:49:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:37 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 09:49:37 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:37 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:49:37 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:37.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:49:38 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:38 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 09:49:38 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:38.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 09:49:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/094938 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 09:49:38 compute-1 sudo[90931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjkspeqxcykmnmcazpdlqglzhxatewys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089777.8650198-424-87238266615858/AnsiballZ_file.py'
Oct 10 09:49:38 compute-1 sudo[90931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:49:38 compute-1 python3.9[90933]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:49:38 compute-1 sudo[90931]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:38 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388004390 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:39 compute-1 sudo[91084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyuusmdbxeresdwhekgksxkfbbpnflnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089778.8294396-448-207488737299749/AnsiballZ_mount.py'
Oct 10 09:49:39 compute-1 sudo[91084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:49:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:39 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390004470 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:39 compute-1 python3.9[91086]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct 10 09:49:39 compute-1 sudo[91084]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:39 compute-1 ceph-mon[79167]: pgmap v103: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s; 18 B/s, 1 objects/s recovering
Oct 10 09:49:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:39 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:39 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:39 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:39 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:39.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:40 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:40 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:40 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:40.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:40 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:40 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 09:49:40 compute-1 sudo[91236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oszrzhkawyfziuumxchclexbypgszvdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089780.465242-532-143074137182323/AnsiballZ_file.py'
Oct 10 09:49:40 compute-1 sudo[91236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:49:40 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:40 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:41 compute-1 python3.9[91238]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:49:41 compute-1 sudo[91236]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:41 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:41 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:41 compute-1 ceph-mon[79167]: pgmap v104: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 829 B/s wr, 2 op/s; 14 B/s, 0 objects/s recovering
Oct 10 09:49:41 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:41 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003430 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:41 compute-1 sudo[91392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqnvcicgiyfnmogyrpmydhkvksgsuslj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089781.253526-556-73985385732908/AnsiballZ_stat.py'
Oct 10 09:49:41 compute-1 sudo[91392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:49:41 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:41 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:41 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:41.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:41 compute-1 python3.9[91394]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:49:41 compute-1 sudo[91392]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:42 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:42 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:42 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:42.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:42 compute-1 sudo[91470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atpontlrppgdbtuxpppdpsfydajarfxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089781.253526-556-73985385732908/AnsiballZ_file.py'
Oct 10 09:49:42 compute-1 sudo[91470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:49:42 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:49:42 compute-1 python3.9[91472]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:49:42 compute-1 sudo[91470]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:42 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:42 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:43 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370000d90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:43 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370000d90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:43 compute-1 ceph-mon[79167]: pgmap v105: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1.0 KiB/s wr, 4 op/s; 12 B/s, 0 objects/s recovering
Oct 10 09:49:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/094943 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 09:49:43 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:43 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:43 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:43.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:43 compute-1 sudo[91623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yystvdulqfgigqtdzfjiewuolpqiqymj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089783.3455102-628-72646999545581/AnsiballZ_getent.py'
Oct 10 09:49:43 compute-1 sudo[91623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:49:44 compute-1 python3.9[91625]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct 10 09:49:44 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:44 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:49:44 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:44.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:49:44 compute-1 sudo[91623]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:44 compute-1 sudo[91674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:49:44 compute-1 sudo[91674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:49:44 compute-1 sudo[91674]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:44 compute-1 sudo[91728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 09:49:44 compute-1 sudo[91728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:49:44 compute-1 sudo[91826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amafzyzfhgrifplijjbwgaodzxujawgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089784.4665582-658-27408796014529/AnsiballZ_getent.py'
Oct 10 09:49:44 compute-1 sudo[91826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:49:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:44 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384003430 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:45 compute-1 python3.9[91829]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct 10 09:49:45 compute-1 sudo[91826]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:45 compute-1 sudo[91728]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:45 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:45 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:45 compute-1 ceph-mon[79167]: pgmap v106: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 921 B/s wr, 3 op/s; 10 B/s, 0 objects/s recovering
Oct 10 09:49:45 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:45 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:45 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:45.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:45 compute-1 sudo[92011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nisjgtfewsbcrbrilffpvzeueepdmqdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089785.285362-682-262212651692096/AnsiballZ_group.py'
Oct 10 09:49:45 compute-1 sudo[92011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:49:46 compute-1 python3.9[92013]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 10 09:49:46 compute-1 sudo[92011]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:46 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:46 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:46 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:46.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:46 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:49:46 compute-1 sudo[92163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxefsexgabqulzepopjysgypitebubsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089786.3704062-709-225227288187629/AnsiballZ_file.py'
Oct 10 09:49:46 compute-1 sudo[92163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:49:46 compute-1 python3.9[92165]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct 10 09:49:46 compute-1 sudo[92163]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:46 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:46 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:49:47 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:47 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:47 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:47 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:47 compute-1 ceph-mon[79167]: pgmap v107: 353 pgs: 353 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 785 B/s wr, 3 op/s; 9 B/s, 0 objects/s recovering
Oct 10 09:49:47 compute-1 sudo[92316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xncrmrlytmoiuwnzlsxjpwovqziwtyof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089787.3455105-742-143376989177889/AnsiballZ_dnf.py'
Oct 10 09:49:47 compute-1 sudo[92316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:49:47 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:47 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:49:47 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:47.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:49:47 compute-1 python3.9[92318]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 09:49:48 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:48 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:48 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:48.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:48 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:49 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:49:49 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:49:49 compute-1 ceph-mon[79167]: pgmap v108: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 1023 B/s wr, 3 op/s; 9 B/s, 0 objects/s recovering
Oct 10 09:49:49 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:49:49 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:49:49 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:49:49 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:49:49 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 09:49:49 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:49:49 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:49:49 compute-1 sudo[92316]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:49 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:49 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:49 compute-1 sudo[92470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyxpodgqufajndmxbpsfrhmgtxawypxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089789.3534877-767-190592197297612/AnsiballZ_file.py'
Oct 10 09:49:49 compute-1 sudo[92470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:49:49 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:49 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:49 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:49.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:49 compute-1 python3.9[92472]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:49:49 compute-1 sudo[92470]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:50 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:50 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:50 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:50.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:50 compute-1 ceph-mon[79167]: mgrmap e33: compute-0.xkdepb(active, since 92s), standbys: compute-1.rfugxc, compute-2.gkrssp
Oct 10 09:49:50 compute-1 sudo[92645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awsdfcvhkhbqmuaaciubfclfyhjzqnwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089790.15782-791-87302646817066/AnsiballZ_stat.py'
Oct 10 09:49:50 compute-1 sudo[92602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:49:50 compute-1 sudo[92645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:49:50 compute-1 sudo[92602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:49:50 compute-1 sudo[92602]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:50 compute-1 python3.9[92648]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:49:50 compute-1 sudo[92645]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:50 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:50 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:51 compute-1 sudo[92725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmmtuexvvypphxxjnaccrlxpnrheyyay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089790.15782-791-87302646817066/AnsiballZ_file.py'
Oct 10 09:49:51 compute-1 sudo[92725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:49:51 compute-1 ceph-mon[79167]: pgmap v109: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 511 B/s wr, 1 op/s
Oct 10 09:49:51 compute-1 python3.9[92727]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:49:51 compute-1 sudo[92725]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:51 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370001d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:51 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378003fb0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:51 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:51 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:51 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:51.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:51 compute-1 sudo[92878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyepypetxkdswiefcbtdlstbmkamzeta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089791.5773854-829-211845045359966/AnsiballZ_stat.py'
Oct 10 09:49:51 compute-1 sudo[92878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:49:52 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:52 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:52 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:52.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:52 compute-1 python3.9[92880]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:49:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:49:52 compute-1 sudo[92878]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:52 compute-1 sudo[92956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xysaibitqrdvvargmcehqcsplawpqhbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089791.5773854-829-211845045359966/AnsiballZ_file.py'
Oct 10 09:49:52 compute-1 sudo[92956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:49:52 compute-1 python3.9[92958]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:49:52 compute-1 sudo[92956]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:52 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:52 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388004390 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:53 compute-1 ceph-mon[79167]: pgmap v110: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 511 B/s wr, 1 op/s
Oct 10 09:49:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:53 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:53 compute-1 sudo[93109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exwyypxhlpomvoqthqgimipuoprmbphr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089793.1599712-874-271235595791172/AnsiballZ_dnf.py'
Oct 10 09:49:53 compute-1 sudo[93109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:49:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:53 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:53 compute-1 sudo[93112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 09:49:53 compute-1 sudo[93112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:49:53 compute-1 sudo[93112]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:53 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:53 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:53 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:53.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:53 compute-1 python3.9[93111]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 09:49:54 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:54 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 09:49:54 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:54.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 09:49:54 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:49:54 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:49:54 compute-1 sudo[93109]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:54 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:54 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378003fd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:55 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388004390 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:55 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378003fd0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:55 compute-1 ceph-mon[79167]: pgmap v111: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Oct 10 09:49:55 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:55 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:55 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:55.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:55 compute-1 python3.9[93288]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:49:56 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:56 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:56 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:56.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:56 compute-1 python3.9[93440]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct 10 09:49:56 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:56 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370001d70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:49:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:57 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:57 compute-1 python3.9[93591]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:49:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:57 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388004390 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:57 compute-1 ceph-mon[79167]: pgmap v112: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Oct 10 09:49:57 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:57 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:57 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:57.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:58 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:58 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:49:58 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:49:58.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:49:58 compute-1 sudo[93741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luaxrtsewvjdhotjajprlckudxaxtbcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089798.190757-997-206561830448357/AnsiballZ_systemd.py'
Oct 10 09:49:58 compute-1 sudo[93741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:49:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:58 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378003ff0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:59 compute-1 python3.9[93743]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:49:59 compute-1 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct 10 09:49:59 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:59 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370002d40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:59 compute-1 systemd[1]: tuned.service: Deactivated successfully.
Oct 10 09:49:59 compute-1 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct 10 09:49:59 compute-1 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 10 09:49:59 compute-1 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 10 09:49:59 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:49:59 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:49:59 compute-1 ceph-mon[79167]: pgmap v113: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 255 B/s wr, 0 op/s
Oct 10 09:49:59 compute-1 sudo[93741]: pam_unix(sudo:session): session closed for user root
Oct 10 09:49:59 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:49:59 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:49:59 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:49:59.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:50:00 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:00 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:00 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:00.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:00 compute-1 python3.9[93905]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct 10 09:50:00 compute-1 ceph-mon[79167]: overall HEALTH_OK
Oct 10 09:50:00 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:00 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388004390 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:01 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:01 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378004010 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:01 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:01 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370002d40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:01 compute-1 ceph-mon[79167]: pgmap v114: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:01 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:50:01 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:01 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:01 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:01.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:02 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:02 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:02 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:02.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:50:02 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:02 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:03 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388004390 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:03 compute-1 sudo[94057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eatxhdohjvrdmdjyihcvbkmihtcqalww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089803.2101696-1168-123194944007650/AnsiballZ_systemd.py'
Oct 10 09:50:03 compute-1 sudo[94057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:50:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:03 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378004030 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:03 compute-1 ceph-mon[79167]: pgmap v115: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:50:03 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:03 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 09:50:03 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:03.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 09:50:03 compute-1 python3.9[94059]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:50:04 compute-1 sudo[94057]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:04 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:04 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:04 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:04.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:04 compute-1 sudo[94211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttnvwsaxlvdjkxituywvmbzqdytirckd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089804.1720269-1168-149123090667621/AnsiballZ_systemd.py'
Oct 10 09:50:04 compute-1 sudo[94211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:50:04 compute-1 python3.9[94213]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:50:04 compute-1 sudo[94211]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:04 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:04 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370002d40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:05 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:05 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:05 compute-1 sshd-session[88144]: Connection closed by 192.168.122.30 port 35846
Oct 10 09:50:05 compute-1 sshd-session[88141]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:50:05 compute-1 systemd[1]: session-39.scope: Deactivated successfully.
Oct 10 09:50:05 compute-1 systemd[1]: session-39.scope: Consumed 1min 8.481s CPU time.
Oct 10 09:50:05 compute-1 systemd-logind[789]: Session 39 logged out. Waiting for processes to exit.
Oct 10 09:50:05 compute-1 systemd-logind[789]: Removed session 39.
Oct 10 09:50:05 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:05 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388004390 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:05 compute-1 ceph-mon[79167]: pgmap v116: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:05 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:05 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:05 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:05.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:06 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:06 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:06 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:06.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:06 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:06 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378004050 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:50:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:07 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370002d40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:07 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:07 compute-1 ceph-mon[79167]: pgmap v117: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:07 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:07 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:50:07 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:07.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:50:08 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:08 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:08 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:08.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:08 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388004390 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378004070 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:09 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370002d40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:09 compute-1 ceph-mon[79167]: pgmap v118: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:09 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:09 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:50:09 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:09.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:50:10 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:10 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:10 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:10.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:10 compute-1 sudo[94243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:50:10 compute-1 sudo[94243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:50:10 compute-1 sudo[94243]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:10 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:10 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:11 compute-1 sshd-session[94268]: Accepted publickey for zuul from 192.168.122.30 port 53018 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:50:11 compute-1 systemd-logind[789]: New session 40 of user zuul.
Oct 10 09:50:11 compute-1 systemd[1]: Started Session 40 of User zuul.
Oct 10 09:50:11 compute-1 sshd-session[94268]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:50:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:11 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388004390 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:11 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378004090 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:11 compute-1 ceph-mon[79167]: pgmap v119: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:11 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:11 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 09:50:11 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:11.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 09:50:12 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:12 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:50:12 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:12.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:50:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:50:12 compute-1 python3.9[94422]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:50:12 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:12 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370002d40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:13 compute-1 sudo[94578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oaffjnguawysyjzbkxiyjrclpnudhxzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089812.9006336-69-110233256157163/AnsiballZ_getent.py'
Oct 10 09:50:13 compute-1 sudo[94578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:50:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:13 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:13 compute-1 python3.9[94580]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct 10 09:50:13 compute-1 sudo[94578]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:13 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390002ad0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:13 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:13 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:13 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:13.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:13 compute-1 ceph-mon[79167]: pgmap v120: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:50:14 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:14 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:14 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:14.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:14 compute-1 sudo[94731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awwzdzhqajuxqhykrkixequjgcxxrlyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089814.2101026-105-278556265852738/AnsiballZ_setup.py'
Oct 10 09:50:14 compute-1 sudo[94731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:50:14 compute-1 python3.9[94733]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 09:50:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:14 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378004140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:15 compute-1 sudo[94731]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:15 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370002d40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:15 compute-1 sudo[94816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcuhbtlxngmwomqdcsazkzfgmhinkhiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089814.2101026-105-278556265852738/AnsiballZ_dnf.py'
Oct 10 09:50:15 compute-1 sudo[94816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:50:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:15 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:15 compute-1 python3.9[94818]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 10 09:50:15 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:15 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:50:15 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:15.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:50:15 compute-1 ceph-mon[79167]: pgmap v121: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:16 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:16 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:16 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:16.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:50:16 compute-1 sudo[94816]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:16 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:16 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390002ad0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:50:17 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:17 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378004160 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:17 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:17 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370002d40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:17 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:17 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:50:17 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:17.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:50:17 compute-1 ceph-mon[79167]: pgmap v122: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:17 compute-1 sudo[94970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vailgafbeduxppvywgagdnawbklariav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089817.4764752-147-88736000364252/AnsiballZ_dnf.py'
Oct 10 09:50:17 compute-1 sudo[94970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:50:18 compute-1 python3.9[94972]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 09:50:18 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:18 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:50:18 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:18.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:50:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:18 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:19 compute-1 sudo[94970]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:19 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390002ad0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:19 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378004180 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:19 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:19 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:19 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:19.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:19 compute-1 ceph-mon[79167]: pgmap v123: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:20 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:20 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:20 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:20.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:20 compute-1 sudo[95124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bskbdolzecxpgxhmiyeesewkpyxgehvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089819.4946632-171-155722345087422/AnsiballZ_systemd.py'
Oct 10 09:50:20 compute-1 sudo[95124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:50:20 compute-1 python3.9[95126]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 10 09:50:20 compute-1 sudo[95124]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:20 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:20 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370002d40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:21 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:21 compute-1 python3.9[95280]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:50:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:21 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:21 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:21 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:21 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:21.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:21 compute-1 ceph-mon[79167]: pgmap v124: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:22 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:22 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:22 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:22.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:22 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:50:22 compute-1 sudo[95431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcnixtkomjfxaxdjzmnbsumjlrqcptbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089821.759897-225-29513440166349/AnsiballZ_sefcontext.py'
Oct 10 09:50:22 compute-1 sudo[95431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:50:22 compute-1 python3.9[95433]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct 10 09:50:22 compute-1 ceph-mon[79167]: pgmap v125: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:50:22 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:22 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378004180 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:23 compute-1 sudo[95431]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:23 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370002d40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:23 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:23 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:23 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:23 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:23.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:23 compute-1 python3.9[95584]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:50:24 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:24 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:24 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:24.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:24 compute-1 sudo[95740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyyuqliuuabxcysksinxjqhxmabixoua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089824.4196274-279-102380756075672/AnsiballZ_dnf.py'
Oct 10 09:50:24 compute-1 sudo[95740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:50:24 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:24 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:25 compute-1 python3.9[95742]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 09:50:25 compute-1 ceph-mon[79167]: pgmap v126: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:25 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:25 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378004180 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:25 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:25 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370002d40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:25 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:25 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:25 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:25.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:26 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:26 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:26 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:26.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:26 compute-1 sudo[95740]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:26 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:26 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:27 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:50:27 compute-1 sudo[95894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gublcgqmriaitdxvmtvyigtwchonsdwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089826.5075727-303-15819125570948/AnsiballZ_command.py'
Oct 10 09:50:27 compute-1 sudo[95894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:50:27 compute-1 ceph-mon[79167]: pgmap v127: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:27 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:27 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:27 compute-1 python3.9[95896]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:50:27 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:27 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378004180 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:27 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:27 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:50:27 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:27.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:50:28 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:28 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:28 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:28.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:28 compute-1 sudo[95894]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:28 compute-1 sudo[96182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hegzezyzkjleqhqjtkiemfhledvjufvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089828.5050087-327-27428559802683/AnsiballZ_file.py'
Oct 10 09:50:28 compute-1 sudo[96182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:50:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:28 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370002d40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:29 compute-1 python3.9[96184]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 10 09:50:29 compute-1 sudo[96182]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:29 compute-1 ceph-mon[79167]: pgmap v128: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:29 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:29 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384002690 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:29 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:29 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:29 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:29.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:30 compute-1 python3.9[96335]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:50:30 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:30 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 09:50:30 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:30.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 09:50:30 compute-1 sudo[96460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:50:30 compute-1 sudo[96460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:50:30 compute-1 sudo[96460]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:30 compute-1 sudo[96512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibrvuxctbaiizpueaxlossswskjuhstg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089830.412264-375-74828430310482/AnsiballZ_dnf.py'
Oct 10 09:50:30 compute-1 sudo[96512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:50:30 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:30 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378004180 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:31 compute-1 python3.9[96514]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 09:50:31 compute-1 ceph-mon[79167]: pgmap v129: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:31 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:50:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:31 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370002d40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:31 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:31 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:31 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:31 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:31.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:32 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:32 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:32 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:32.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:32 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:50:32 compute-1 sudo[96512]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:32 compute-1 sudo[96666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujnazvsuxhltkfhlckugujnifovaaexi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089832.5968056-402-40086724119398/AnsiballZ_dnf.py'
Oct 10 09:50:32 compute-1 sudo[96666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:50:32 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:32 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384001480 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:33 compute-1 python3.9[96668]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 09:50:33 compute-1 ceph-mon[79167]: pgmap v130: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:50:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:33 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378004180 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:33 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370002d40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:33 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:33 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 09:50:33 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:33.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 09:50:34 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:34 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:50:34 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:34.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:50:34 compute-1 sudo[96666]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:34 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:34 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:35 compute-1 sudo[96820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnrwdoocbjmefundtukklotpofxrxtfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089834.8552358-438-60349051200912/AnsiballZ_stat.py'
Oct 10 09:50:35 compute-1 sudo[96820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:50:35 compute-1 python3.9[96822]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:50:35 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:35 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa384001480 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:35 compute-1 sudo[96820]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:35 compute-1 ceph-mon[79167]: pgmap v131: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:35 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:35 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378004180 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:35 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:35 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:50:35 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:35.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:50:36 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:36 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:36 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:36.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:36 compute-1 sudo[96975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxegfiactjcyglyanodhkipfsnmtmpgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089835.757291-462-48275370768476/AnsiballZ_slurp.py'
Oct 10 09:50:36 compute-1 sudo[96975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:50:36 compute-1 python3.9[96977]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Oct 10 09:50:36 compute-1 sudo[96975]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:36 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:36 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370002d40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:50:37 compute-1 sshd-session[94271]: Connection closed by 192.168.122.30 port 53018
Oct 10 09:50:37 compute-1 sshd-session[94268]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:50:37 compute-1 systemd[1]: session-40.scope: Deactivated successfully.
Oct 10 09:50:37 compute-1 systemd[1]: session-40.scope: Consumed 19.894s CPU time.
Oct 10 09:50:37 compute-1 systemd-logind[789]: Session 40 logged out. Waiting for processes to exit.
Oct 10 09:50:37 compute-1 systemd-logind[789]: Removed session 40.
Oct 10 09:50:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:37 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:37 compute-1 ceph-mon[79167]: pgmap v132: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:37 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:37 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:37 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:37 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:37.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:38 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:38 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:38 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:38.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:39 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378004180 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:39 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370002d40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:39 compute-1 ceph-mon[79167]: pgmap v133: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:39 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:39 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:39 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:39 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:39.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:40 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:40 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:40 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:40.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:41 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:41 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:41 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:41 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3780041a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:41 compute-1 ceph-mon[79167]: pgmap v134: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:50:41 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:41 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370003e40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:41 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/095041 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 09:50:41 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:41 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:41 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:41.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:42 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:50:42 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:42 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:42 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:42.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:43 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370003e40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:43 compute-1 sshd-session[97005]: Accepted publickey for zuul from 192.168.122.30 port 58476 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:50:43 compute-1 systemd-logind[789]: New session 41 of user zuul.
Oct 10 09:50:43 compute-1 systemd[1]: Started Session 41 of User zuul.
Oct 10 09:50:43 compute-1 sshd-session[97005]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:50:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:43 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3840014c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:43 compute-1 ceph-mon[79167]: pgmap v135: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 09:50:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:43 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3780041c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:43 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:43 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:50:43 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:43.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:50:44 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:44 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:50:44 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:44.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:50:44 compute-1 python3.9[97159]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:50:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:45 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:45 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370003e40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:45 compute-1 ceph-mon[79167]: pgmap v136: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:50:45 compute-1 python3.9[97315]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 09:50:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:45 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388002240 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:45 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:45 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:50:45 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:45.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:50:46 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:46 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:46 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:46.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:46 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:50:46 compute-1 python3.9[97508]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:50:47 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:47 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378004250 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:50:47 compute-1 sshd-session[97008]: Connection closed by 192.168.122.30 port 58476
Oct 10 09:50:47 compute-1 sshd-session[97005]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:50:47 compute-1 systemd[1]: session-41.scope: Deactivated successfully.
Oct 10 09:50:47 compute-1 systemd[1]: session-41.scope: Consumed 2.624s CPU time.
Oct 10 09:50:47 compute-1 systemd-logind[789]: Session 41 logged out. Waiting for processes to exit.
Oct 10 09:50:47 compute-1 systemd-logind[789]: Removed session 41.
Oct 10 09:50:47 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:47 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:47 compute-1 ceph-mon[79167]: pgmap v137: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:50:47 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:47 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370003e40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:47 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:47 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:47 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:47.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:48 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:48 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:48 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:48.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:49 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa388002240 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:49 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378004270 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:49 compute-1 ceph-mon[79167]: pgmap v138: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:50:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:49 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:49 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:49 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:49 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:49.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:50 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:50 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:50 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:50.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:50 compute-1 sudo[97536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:50:50 compute-1 sudo[97536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:50:50 compute-1 sudo[97536]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:51 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370003e40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:51 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 09:50:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:51 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370003e40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:51 compute-1 ceph-mon[79167]: pgmap v139: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:50:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:51 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:51 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:51 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:50:51 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:51.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:50:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:50:52 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:52 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:50:52 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:52.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:50:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:53 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:53 compute-1 sshd-session[97563]: Accepted publickey for zuul from 192.168.122.30 port 34956 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:50:53 compute-1 systemd-logind[789]: New session 42 of user zuul.
Oct 10 09:50:53 compute-1 systemd[1]: Started Session 42 of User zuul.
Oct 10 09:50:53 compute-1 sshd-session[97563]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:50:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:53 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370003e40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:53 compute-1 ceph-mon[79167]: pgmap v140: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 09:50:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:53 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390003760 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:53 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:53 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:53 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:53.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:53 compute-1 sudo[97644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:50:53 compute-1 sudo[97644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:50:53 compute-1 sudo[97644]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:54 compute-1 sudo[97692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 09:50:54 compute-1 sudo[97692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:50:54 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:54 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:54 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:54.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:54 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:54 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 09:50:54 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:54 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 09:50:54 compute-1 python3.9[97767]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:50:54 compute-1 sudo[97692]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:55 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378004320 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:55 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:55 compute-1 python3.9[97952]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:50:55 compute-1 ceph-mon[79167]: pgmap v141: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 09:50:55 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:50:55 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:50:55 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:50:55 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:50:55 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 09:50:55 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:50:55 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:50:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:55 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:55 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:55 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 09:50:55 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:55.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 09:50:56 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:56 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:50:56 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:56.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:50:56 compute-1 sudo[98107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnnspflbcxpgkvgavuczdryyydioertg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089855.9406757-81-254454023908315/AnsiballZ_setup.py'
Oct 10 09:50:56 compute-1 sudo[98107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:50:56 compute-1 python3.9[98109]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 09:50:56 compute-1 sudo[98107]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:57 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa390003760 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:50:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:57 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 09:50:57 compute-1 sudo[98191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srvljzrrevtsabdstrcpiszumyyceswj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089855.9406757-81-254454023908315/AnsiballZ_dnf.py'
Oct 10 09:50:57 compute-1 sudo[98191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:50:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:57 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378004320 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:57 compute-1 python3.9[98193]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 09:50:57 compute-1 ceph-mon[79167]: pgmap v142: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 09:50:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:57 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:57 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:57 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:57 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:57.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:58 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:58 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:50:58 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:50:58.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:50:58 compute-1 sudo[98191]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:59 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:59 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370003e40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:59 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:59 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370003e40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:59 compute-1 sudo[98225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 09:50:59 compute-1 sudo[98225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:50:59 compute-1 sudo[98225]: pam_unix(sudo:session): session closed for user root
Oct 10 09:50:59 compute-1 ceph-mon[79167]: pgmap v143: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 09:50:59 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:50:59 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:50:59 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:50:59 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378004320 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:50:59 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:50:59 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:50:59 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:50:59.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:50:59 compute-1 sudo[98371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pycnxrobwxyittfohdpveqewtkrhwcvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089859.5736332-117-235855253229101/AnsiballZ_setup.py'
Oct 10 09:50:59 compute-1 sudo[98371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:00 compute-1 python3.9[98373]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 09:51:00 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:00 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:00 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:00.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:00 compute-1 sudo[98371]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:01 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:51:01 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:01 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:51:01 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370003e40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:01 compute-1 sudo[98567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gboyztilugsfkmaqrbkgwldcohufuqgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089860.9020498-150-279928106746171/AnsiballZ_file.py'
Oct 10 09:51:01 compute-1 sshd-session[70591]: Received disconnect from 38.102.83.82 port 47934:11: disconnected by user
Oct 10 09:51:01 compute-1 sshd-session[70591]: Disconnected from user zuul 38.102.83.82 port 47934
Oct 10 09:51:01 compute-1 sudo[98567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:01 compute-1 sshd-session[70588]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:51:01 compute-1 systemd-logind[789]: Session 20 logged out. Waiting for processes to exit.
Oct 10 09:51:01 compute-1 systemd[1]: session-20.scope: Deactivated successfully.
Oct 10 09:51:01 compute-1 systemd[1]: session-20.scope: Consumed 10.156s CPU time.
Oct 10 09:51:01 compute-1 systemd-logind[789]: Removed session 20.
Oct 10 09:51:01 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:51:01 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370003e40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:01 compute-1 ceph-mon[79167]: pgmap v144: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 09:51:01 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:51:01 compute-1 python3.9[98569]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:51:01 compute-1 sudo[98567]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:01 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:01 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 09:51:01 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:01.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 09:51:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:51:02 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:02 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 09:51:02 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:02.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 09:51:02 compute-1 sudo[98719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkotsyjnsdktmspnqauqprculwbdmdcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089861.9195473-174-87111666336786/AnsiballZ_command.py'
Oct 10 09:51:02 compute-1 sudo[98719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:02 compute-1 python3.9[98721]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:51:02 compute-1 sudo[98719]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:51:03 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa378004320 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:51:03 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa3a0004b70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:03 compute-1 sudo[98885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmzgnouuuyjuatagjfwvoqdwtokgouqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089862.970746-198-153901213056259/AnsiballZ_stat.py'
Oct 10 09:51:03 compute-1 sudo[98885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:51:03 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370003e40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:03 compute-1 python3.9[98887]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:51:03 compute-1 ceph-mon[79167]: pgmap v145: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 09:51:03 compute-1 sudo[98885]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/095103 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 09:51:03 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:03 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:03 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:03.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:04 compute-1 sudo[98963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbjtaznvbmbgllxxmuhkakqtbdwvtrie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089862.970746-198-153901213056259/AnsiballZ_file.py'
Oct 10 09:51:04 compute-1 sudo[98963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:04 compute-1 python3.9[98965]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:51:04 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:04 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:51:04 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:04.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:51:04 compute-1 sudo[98963]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:04 compute-1 sudo[99115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkddabqsdjxummlbsqnagixwtmjauwge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089864.4446597-234-57124975361061/AnsiballZ_stat.py'
Oct 10 09:51:04 compute-1 sudo[99115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:04 compute-1 python3.9[99117]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:51:04 compute-1 sudo[99115]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:05 compute-1 kernel: ganesha.nfsd[91239]: segfault at 50 ip 00007fa45499c32e sp 00007fa40bffe210 error 4 in libntirpc.so.5.8[7fa454981000+2c000] likely on CPU 1 (core 0, socket 1)
Oct 10 09:51:05 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 10 09:51:05 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[85241]: 10/10/2025 09:51:05 : epoch 68e8d61a : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa370003e40 fd 49 proxy ignored for local
Oct 10 09:51:05 compute-1 systemd[1]: Created slice Slice /system/systemd-coredump.
Oct 10 09:51:05 compute-1 systemd[1]: Started Process Core Dump (PID 99130/UID 0).
Oct 10 09:51:05 compute-1 sudo[99195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afdidukcpkjjpyuviqkfjdkfkvkehziu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089864.4446597-234-57124975361061/AnsiballZ_file.py'
Oct 10 09:51:05 compute-1 sudo[99195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:05 compute-1 python3.9[99198]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:51:05 compute-1 sudo[99195]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:05 compute-1 ceph-mon[79167]: pgmap v146: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 09:51:05 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:05 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:51:05 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:05.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:51:06 compute-1 sudo[99348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqljpuvqtzvqkyjlqhtvqauvehthcqpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089865.6778316-273-150660223768786/AnsiballZ_ini_file.py'
Oct 10 09:51:06 compute-1 sudo[99348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:06 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:06 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:51:06 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:06.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:51:06 compute-1 systemd-coredump[99144]: Process 85245 (ganesha.nfsd) of user 0 dumped core.
                                                   
                                                   Stack trace of thread 65:
                                                   #0  0x00007fa45499c32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                   ELF object binary architecture: AMD x86-64
Oct 10 09:51:06 compute-1 systemd[1]: systemd-coredump@0-99130-0.service: Deactivated successfully.
Oct 10 09:51:06 compute-1 python3.9[99350]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:51:06 compute-1 systemd[1]: systemd-coredump@0-99130-0.service: Consumed 1.240s CPU time.
Oct 10 09:51:06 compute-1 sudo[99348]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:06 compute-1 podman[99355]: 2025-10-10 09:51:06.421970769 +0000 UTC m=+0.030280238 container died 2391dd632d14ec9648c3d8d1edd069f6584c3097475f03ae8ea909b98a6066a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:51:06 compute-1 systemd[1]: var-lib-containers-storage-overlay-32b88a7cec485365e9b39c695c6cd554fe2d4deeb9799c6b37cc487351d505c2-merged.mount: Deactivated successfully.
Oct 10 09:51:06 compute-1 podman[99355]: 2025-10-10 09:51:06.496815917 +0000 UTC m=+0.105125336 container remove 2391dd632d14ec9648c3d8d1edd069f6584c3097475f03ae8ea909b98a6066a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct 10 09:51:06 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Main process exited, code=exited, status=139/n/a
Oct 10 09:51:06 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Failed with result 'exit-code'.
Oct 10 09:51:06 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 2.356s CPU time.
Oct 10 09:51:06 compute-1 sudo[99547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqywoziaakvdppcjecghlgpjobupyyku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089866.5417006-273-159643553917421/AnsiballZ_ini_file.py'
Oct 10 09:51:06 compute-1 sudo[99547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:07 compute-1 python3.9[99549]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:51:07 compute-1 sudo[99547]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:51:07 compute-1 sudo[99700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbqryjlyjfdyapvzfxdenjdbudadznjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089867.3421645-273-230263351008941/AnsiballZ_ini_file.py'
Oct 10 09:51:07 compute-1 sudo[99700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:07 compute-1 ceph-mon[79167]: pgmap v147: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 09:51:07 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:07 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:07 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:07.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:07 compute-1 python3.9[99702]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:51:07 compute-1 sudo[99700]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:08 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:08 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:51:08 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:08.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:51:08 compute-1 sudo[99852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmtatazvzuzkqkhvrztmrbtburqeozjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089868.1655147-273-163719539421971/AnsiballZ_ini_file.py'
Oct 10 09:51:08 compute-1 sudo[99852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:08 compute-1 python3.9[99854]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:51:08 compute-1 sudo[99852]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:09 compute-1 sudo[100005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpvwqpjzxadmpkfpwsffcevjcarjgrwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089869.1440012-366-231758685465033/AnsiballZ_dnf.py'
Oct 10 09:51:09 compute-1 sudo[100005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:09 compute-1 python3.9[100007]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 09:51:09 compute-1 ceph-mon[79167]: pgmap v148: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 09:51:09 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:09 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:09 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:09.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:10 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:10 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:10 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:10.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:10 compute-1 sudo[100009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:51:10 compute-1 sudo[100009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:51:10 compute-1 sudo[100009]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:10 compute-1 sudo[100005]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/095111 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 09:51:11 compute-1 ceph-mon[79167]: pgmap v149: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Oct 10 09:51:11 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:11 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:51:11 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:11.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:51:11 compute-1 sudo[100184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbyeueaqaitgdoqoxbxgcwdbxcjhyazj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089871.575975-399-249160035168039/AnsiballZ_setup.py'
Oct 10 09:51:11 compute-1 sudo[100184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:51:12 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:12 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:12 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:12.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:12 compute-1 python3.9[100186]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:51:12 compute-1 sudo[100184]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:13 compute-1 sudo[100338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ieilynuimbzigkolsrblwepkbpchqpdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089872.758167-423-94501469558712/AnsiballZ_stat.py'
Oct 10 09:51:13 compute-1 sudo[100338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:13 compute-1 python3.9[100340]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:51:13 compute-1 sudo[100338]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:13 compute-1 ceph-mon[79167]: pgmap v150: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 09:51:13 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:13 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:51:13 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:13.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:51:13 compute-1 sudo[100491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxqnqnspnzhawcvxxeorjvegtrvcnycf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089873.5672681-450-74487093203290/AnsiballZ_stat.py'
Oct 10 09:51:13 compute-1 sudo[100491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:14 compute-1 python3.9[100493]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:51:14 compute-1 sudo[100491]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:14 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:14 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:14 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:14.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:14 compute-1 sudo[100643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixptqxdjizubufqvyaaksfojzcrvckck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089874.5222259-480-214834270806985/AnsiballZ_service_facts.py'
Oct 10 09:51:15 compute-1 sudo[100643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:15 compute-1 python3.9[100645]: ansible-service_facts Invoked
Oct 10 09:51:15 compute-1 network[100663]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 10 09:51:15 compute-1 network[100664]: 'network-scripts' will be removed from distribution in near future.
Oct 10 09:51:15 compute-1 network[100665]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 10 09:51:15 compute-1 ceph-mon[79167]: pgmap v151: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:51:15 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:15 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:15 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:15.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:16 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:16 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:51:16 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:16.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:51:16 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Scheduled restart job, restart counter is at 1.
Oct 10 09:51:16 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 09:51:16 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 2.356s CPU time.
Oct 10 09:51:16 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 09:51:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:51:17 compute-1 podman[100752]: 2025-10-10 09:51:17.024148902 +0000 UTC m=+0.049651477 container create 38469aeeacb4e5fd5cce3c07da0fa2ff7ec854adc34a8c8ac6ec34fa6024b1ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:51:17 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bdcad42e292001657325bb58d2d66242ee3ebf8e20268f3dc10a8f21749e3ac/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 10 09:51:17 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bdcad42e292001657325bb58d2d66242ee3ebf8e20268f3dc10a8f21749e3ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:51:17 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bdcad42e292001657325bb58d2d66242ee3ebf8e20268f3dc10a8f21749e3ac/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:51:17 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bdcad42e292001657325bb58d2d66242ee3ebf8e20268f3dc10a8f21749e3ac/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.mssvzx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:51:17 compute-1 podman[100752]: 2025-10-10 09:51:17.095900216 +0000 UTC m=+0.121402881 container init 38469aeeacb4e5fd5cce3c07da0fa2ff7ec854adc34a8c8ac6ec34fa6024b1ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 10 09:51:17 compute-1 podman[100752]: 2025-10-10 09:51:17.004231544 +0000 UTC m=+0.029734149 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:51:17 compute-1 podman[100752]: 2025-10-10 09:51:17.105685495 +0000 UTC m=+0.131188090 container start 38469aeeacb4e5fd5cce3c07da0fa2ff7ec854adc34a8c8ac6ec34fa6024b1ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct 10 09:51:17 compute-1 bash[100752]: 38469aeeacb4e5fd5cce3c07da0fa2ff7ec854adc34a8c8ac6ec34fa6024b1ed
Oct 10 09:51:17 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 09:51:17 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:17 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 10 09:51:17 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:17 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 10 09:51:17 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:17 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 10 09:51:17 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:17 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 10 09:51:17 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:17 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 10 09:51:17 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:17 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 10 09:51:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:51:17 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:17 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 10 09:51:17 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:17 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 09:51:17 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:17 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 09:51:17 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:17.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 09:51:17 compute-1 ceph-mon[79167]: pgmap v152: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:51:18 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:18 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:51:18 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:18.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:51:19 compute-1 ceph-mon[79167]: pgmap v153: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 10 09:51:19 compute-1 sudo[100643]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:19 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:19 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:19 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:19.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:20 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:20 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:51:20 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:20.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:51:20 compute-1 sudo[101056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqlrkqhestukzwnkyqhschueuyduipkk ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1760089880.042233-519-75756004689501/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1760089880.042233-519-75756004689501/args'
Oct 10 09:51:20 compute-1 sudo[101056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:20 compute-1 sudo[101056]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:21 compute-1 sudo[101223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfxtbypgkresfawhasjlncaaaelwjrdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089880.9189985-552-182032529622263/AnsiballZ_dnf.py'
Oct 10 09:51:21 compute-1 sudo[101223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:21 compute-1 ceph-mon[79167]: pgmap v154: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 10 09:51:21 compute-1 python3.9[101225]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 09:51:21 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:21 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:21 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:21.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:22 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:51:22 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:22 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:51:22 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:22.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:51:22 compute-1 sudo[101223]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:23 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 09:51:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:23 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 09:51:23 compute-1 ceph-mon[79167]: pgmap v155: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 09:51:23 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:23 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:23 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:23.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:23 compute-1 sudo[101378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbfreyuhppebpjxcbkbsethbmrmwfahn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089883.2682915-591-58243704466625/AnsiballZ_package_facts.py'
Oct 10 09:51:23 compute-1 sudo[101378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:24 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:24 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:51:24 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:24.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:51:24 compute-1 python3.9[101380]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct 10 09:51:24 compute-1 sudo[101378]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:25 compute-1 ceph-mon[79167]: pgmap v156: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 596 B/s wr, 1 op/s
Oct 10 09:51:25 compute-1 sudo[101531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxgfarqwgrquvuvoablokhllatzgtiqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089885.2911935-622-174062803015959/AnsiballZ_stat.py'
Oct 10 09:51:25 compute-1 sudo[101531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:25 compute-1 python3.9[101533]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:51:25 compute-1 sudo[101531]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:25 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:25 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:25 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:25.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:26 compute-1 sudo[101609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzycvbwjwknvcikfpkorgzezkxoooapg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089885.2911935-622-174062803015959/AnsiballZ_file.py'
Oct 10 09:51:26 compute-1 sudo[101609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:26 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:26 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:51:26 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:26.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:51:26 compute-1 python3.9[101611]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:51:26 compute-1 sudo[101609]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:27 compute-1 sudo[101761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjifdezwfzpujeukmijjnralzelekmwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089886.7029068-658-7226070011421/AnsiballZ_stat.py'
Oct 10 09:51:27 compute-1 sudo[101761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:27 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:51:27 compute-1 python3.9[101763]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:51:27 compute-1 sudo[101761]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:27 compute-1 ceph-mon[79167]: pgmap v157: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 09:51:27 compute-1 sudo[101840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrsukbyzxnclwbtkntqgplqcclkrnhfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089886.7029068-658-7226070011421/AnsiballZ_file.py'
Oct 10 09:51:27 compute-1 sudo[101840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:27 compute-1 python3.9[101842]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:51:27 compute-1 sudo[101840]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:27 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:27 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 09:51:27 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:27.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 09:51:28 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:28 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:28 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:28.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 09:51:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 10 09:51:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 10 09:51:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 10 09:51:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 10 09:51:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 10 09:51:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 10 09:51:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 09:51:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 09:51:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 09:51:29 compute-1 sudo[101992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rscxakqvqekhvrlcbavmymqsfejzrsff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089888.7648668-712-31693631578994/AnsiballZ_lineinfile.py'
Oct 10 09:51:29 compute-1 sudo[101992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 10 09:51:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 09:51:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 10 09:51:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 10 09:51:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 10 09:51:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 10 09:51:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 10 09:51:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 10 09:51:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 10 09:51:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 10 09:51:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 10 09:51:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 10 09:51:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 10 09:51:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 10 09:51:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 09:51:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 10 09:51:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 09:51:29 compute-1 ceph-mon[79167]: pgmap v158: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 09:51:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba4000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:29 compute-1 python3.9[101995]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:51:29 compute-1 sudo[101992]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0001c40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:29 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:29 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:51:29 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:29.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:51:30 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:30 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:30 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:30.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:30 compute-1 sudo[102161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-woxixtaikfdzieydxyvysflbxwcfrmfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089890.5226853-756-193624724954036/AnsiballZ_setup.py'
Oct 10 09:51:30 compute-1 sudo[102161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:31 compute-1 sudo[102164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:51:31 compute-1 sudo[102164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:51:31 compute-1 sudo[102164]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:31 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:31 compute-1 python3.9[102163]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 09:51:31 compute-1 ceph-mon[79167]: pgmap v159: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 09:51:31 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:51:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:31 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:31 compute-1 sudo[102161]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:31 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:31 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:31 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:31 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:31.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:32 compute-1 sudo[102271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hckcfqsduepkhcgusensxgvanbjvtlha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089890.5226853-756-193624724954036/AnsiballZ_systemd.py'
Oct 10 09:51:32 compute-1 sudo[102271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:32 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:51:32 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:32 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:32 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:32.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:32 compute-1 python3.9[102273]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:51:32 compute-1 sudo[102271]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/095133 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 09:51:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:33 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0001c40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:33 compute-1 sshd-session[97567]: Connection closed by 192.168.122.30 port 34956
Oct 10 09:51:33 compute-1 sshd-session[97563]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:51:33 compute-1 systemd-logind[789]: Session 42 logged out. Waiting for processes to exit.
Oct 10 09:51:33 compute-1 systemd[1]: session-42.scope: Deactivated successfully.
Oct 10 09:51:33 compute-1 systemd[1]: session-42.scope: Consumed 28.162s CPU time.
Oct 10 09:51:33 compute-1 systemd-logind[789]: Removed session 42.
Oct 10 09:51:33 compute-1 ceph-mon[79167]: pgmap v160: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 09:51:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:33 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0001c40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:33 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:33 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:33 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:33 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:33.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:34 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:34 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 09:51:34 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:34.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 09:51:35 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:35 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:35 compute-1 ceph-mon[79167]: pgmap v161: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 09:51:35 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:35 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b800016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:35 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:35 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0002f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:35 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:35 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:51:35 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:35.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:51:36 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:36 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:51:36 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:36.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:51:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:37 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:51:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:37 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:37 compute-1 ceph-mon[79167]: pgmap v162: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 09:51:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:37 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:37 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:37 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:51:37 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:37.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:51:38 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:38 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:51:38 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:38.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:51:38 compute-1 sshd-session[102303]: Accepted publickey for zuul from 192.168.122.30 port 54582 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:51:38 compute-1 systemd-logind[789]: New session 43 of user zuul.
Oct 10 09:51:39 compute-1 systemd[1]: Started Session 43 of User zuul.
Oct 10 09:51:39 compute-1 sshd-session[102303]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:51:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:39 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0002f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:39 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:39 compute-1 ceph-mon[79167]: pgmap v163: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 2 op/s
Oct 10 09:51:39 compute-1 sudo[102457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jovfnwktcyrlqzxykdkwzgsbzbfojkja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089899.1359086-27-214226110274951/AnsiballZ_file.py'
Oct 10 09:51:39 compute-1 sudo[102457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:39 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:39 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:39 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:51:39 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:39.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:51:39 compute-1 python3.9[102459]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:51:39 compute-1 sudo[102457]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:40 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:40 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:51:40 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:40.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:51:40 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/095140 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 09:51:40 compute-1 sudo[102609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puxiymgycmszdqrdauqodggnkqeylilu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089900.1313546-63-263304422857492/AnsiballZ_stat.py'
Oct 10 09:51:40 compute-1 sudo[102609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:40 compute-1 python3.9[102611]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:51:40 compute-1 sudo[102609]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:41 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:41 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:41 compute-1 sudo[102687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apwwwirhasxviklcgmnbsoqqnjyqmbzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089900.1313546-63-263304422857492/AnsiballZ_file.py'
Oct 10 09:51:41 compute-1 sudo[102687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:41 compute-1 python3.9[102689]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:51:41 compute-1 sudo[102687]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:41 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:41 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0002f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:41 compute-1 ceph-mon[79167]: pgmap v164: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Oct 10 09:51:41 compute-1 sshd-session[102306]: Connection closed by 192.168.122.30 port 54582
Oct 10 09:51:41 compute-1 sshd-session[102303]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:51:41 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:41 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:41 compute-1 systemd[1]: session-43.scope: Deactivated successfully.
Oct 10 09:51:41 compute-1 systemd[1]: session-43.scope: Consumed 2.030s CPU time.
Oct 10 09:51:41 compute-1 systemd-logind[789]: Session 43 logged out. Waiting for processes to exit.
Oct 10 09:51:41 compute-1 systemd-logind[789]: Removed session 43.
Oct 10 09:51:41 compute-1 systemd[82226]: Created slice User Background Tasks Slice.
Oct 10 09:51:41 compute-1 systemd[82226]: Starting Cleanup of User's Temporary Files and Directories...
Oct 10 09:51:41 compute-1 systemd[82226]: Finished Cleanup of User's Temporary Files and Directories.
Oct 10 09:51:41 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:41 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:41 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:41.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:42 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:51:42 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:42 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:42 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:42.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:43 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:43 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:43 compute-1 ceph-mon[79167]: pgmap v165: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 09:51:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:43 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0002f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:43 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:43 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:43 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:43.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:44 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:44 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:51:44 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:44.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:51:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:45 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:45 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:45 compute-1 ceph-mon[79167]: pgmap v166: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:51:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:45 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b800032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:45 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:45 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:45 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:45.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:46 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:46 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 09:51:46 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:46.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 09:51:46 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:51:46 compute-1 sshd-session[102719]: Accepted publickey for zuul from 192.168.122.30 port 60804 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:51:46 compute-1 systemd-logind[789]: New session 44 of user zuul.
Oct 10 09:51:46 compute-1 systemd[1]: Started Session 44 of User zuul.
Oct 10 09:51:46 compute-1 sshd-session[102719]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:51:47 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:47 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0002f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:51:47 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:47 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:47 compute-1 ceph-mon[79167]: pgmap v167: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:51:47 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:47 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:47 compute-1 python3.9[102873]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:51:47 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:47 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:47 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:47.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:48 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:48 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:51:48 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:48.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:51:48 compute-1 sudo[103027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aarqrjmqnosmkbwuqmxoevbqshqwdjib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089908.4088268-60-215100028016040/AnsiballZ_file.py'
Oct 10 09:51:48 compute-1 sudo[103027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:49 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b800032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:49 compute-1 python3.9[103029]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:51:49 compute-1 sudo[103027]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:49 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0002f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:49 compute-1 ceph-mon[79167]: pgmap v168: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Oct 10 09:51:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:49 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:49 compute-1 sudo[103203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcutnqqyazapbmakxvqtjobhuoocxige ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089909.3782449-84-2693757043039/AnsiballZ_stat.py'
Oct 10 09:51:49 compute-1 sudo[103203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:49 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:49 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:49 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:49.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:50 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:50 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 09:51:50 compute-1 python3.9[103205]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:51:50 compute-1 sudo[103203]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:50 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:50 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:50 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:50.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:50 compute-1 sudo[103281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgskglbjpbqpiypuchrlckqynnduvmkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089909.3782449-84-2693757043039/AnsiballZ_file.py'
Oct 10 09:51:50 compute-1 sudo[103281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:50 compute-1 python3.9[103283]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.4020x0r0 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:51:50 compute-1 sudo[103281]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:51 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:51 compute-1 sudo[103308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:51:51 compute-1 sudo[103308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:51:51 compute-1 sudo[103308]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:51 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:51 compute-1 sudo[103459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brrunltjhciwoymksxpiyykopuxlwfjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089911.228123-144-252032831723507/AnsiballZ_stat.py'
Oct 10 09:51:51 compute-1 sudo[103459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:51 compute-1 ceph-mon[79167]: pgmap v169: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 10 09:51:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:51 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:51 compute-1 python3.9[103461]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:51:51 compute-1 sudo[103459]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:51 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:51 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:51 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:51.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:52 compute-1 sudo[103537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnuvklwqhjocnbyxzoonukhxztbsleca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089911.228123-144-252032831723507/AnsiballZ_file.py'
Oct 10 09:51:52 compute-1 sudo[103537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:51:52 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:52 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:51:52 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:52.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:51:52 compute-1 python3.9[103539]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.0d8o8r3m recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:51:52 compute-1 sudo[103537]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:52 compute-1 sudo[103689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhkcsxheavgqfpcgukvuozlatsclbhrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089912.6467688-183-181106435017828/AnsiballZ_file.py'
Oct 10 09:51:52 compute-1 sudo[103689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:53 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:53 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 09:51:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:53 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 09:51:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:53 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 09:51:53 compute-1 python3.9[103691]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:51:53 compute-1 sudo[103689]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:53 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:53 compute-1 ceph-mon[79167]: pgmap v170: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 09:51:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:53 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:53 compute-1 sudo[103842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urswqxnxavhszpoeiywelrsrolkxnahm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089913.4376264-207-4009468885455/AnsiballZ_stat.py'
Oct 10 09:51:53 compute-1 sudo[103842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:53 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:53 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:53 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:53.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:53 compute-1 python3.9[103844]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:51:54 compute-1 sudo[103842]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:54 compute-1 sudo[103920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjizbuzofhgfbykkkrdtsampltdejvxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089913.4376264-207-4009468885455/AnsiballZ_file.py'
Oct 10 09:51:54 compute-1 sudo[103920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:54 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:54 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:54 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:54.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:54 compute-1 python3.9[103922]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:51:54 compute-1 sudo[103920]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:55 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:55 compute-1 sudo[104072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srmtitwnahtasuclhkrimotqhndpyqbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089914.714268-207-38088214821636/AnsiballZ_stat.py'
Oct 10 09:51:55 compute-1 sudo[104072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:55 compute-1 python3.9[104074]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:51:55 compute-1 sudo[104072]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:55 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:55 compute-1 sudo[104151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhjdkhqnxzymbyaibozgadfnenuutebd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089914.714268-207-38088214821636/AnsiballZ_file.py'
Oct 10 09:51:55 compute-1 sudo[104151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:55 compute-1 ceph-mon[79167]: pgmap v171: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 09:51:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:55 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:55 compute-1 python3.9[104153]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:51:55 compute-1 sudo[104151]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:55 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:55 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:55 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:55.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:56 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:56 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 09:51:56 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:56 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:56 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:56.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:56 compute-1 sudo[104303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzkwtoujquvihdiehgutpduzysgeyyvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089916.0198274-276-244034060935535/AnsiballZ_file.py'
Oct 10 09:51:56 compute-1 sudo[104303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:56 compute-1 python3.9[104305]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:51:56 compute-1 sudo[104303]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:57 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:57 compute-1 sudo[104455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrzbwsddikyoutwgyllaknpippfywyee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089916.8183668-300-235382581825009/AnsiballZ_stat.py'
Oct 10 09:51:57 compute-1 sudo[104455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:51:57 compute-1 python3.9[104457]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:51:57 compute-1 sudo[104455]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:57 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0002f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:57 compute-1 sudo[104534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qumjuzmhdcmlrqsojinjhtnvmltmdqtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089916.8183668-300-235382581825009/AnsiballZ_file.py'
Oct 10 09:51:57 compute-1 ceph-mon[79167]: pgmap v172: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 09:51:57 compute-1 sudo[104534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:57 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:57 compute-1 python3.9[104536]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:51:57 compute-1 sudo[104534]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:57 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:57 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:51:57 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:57.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:51:58 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:58 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:58 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:51:58.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:51:58 compute-1 sudo[104686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llxyfhdbjutzwdftmirnvlvpbomjqljd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089918.1449344-336-104643976989144/AnsiballZ_stat.py'
Oct 10 09:51:58 compute-1 sudo[104686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:58 compute-1 python3.9[104688]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:51:58 compute-1 sudo[104686]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:59 compute-1 sudo[104764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tantwbefjvjluckndepkohdlbvsmqtwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089918.1449344-336-104643976989144/AnsiballZ_file.py'
Oct 10 09:51:59 compute-1 sudo[104764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:51:59 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:59 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:59 compute-1 python3.9[104766]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:51:59 compute-1 sudo[104764]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:59 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:59 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:59 compute-1 ceph-mon[79167]: pgmap v173: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 09:51:59 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:51:59 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0002f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:51:59 compute-1 sudo[104844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:51:59 compute-1 sudo[104844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:51:59 compute-1 sudo[104844]: pam_unix(sudo:session): session closed for user root
Oct 10 09:51:59 compute-1 sudo[104869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 09:51:59 compute-1 sudo[104869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:51:59 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:51:59 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:51:59 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:51:59.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:00 compute-1 sudo[104967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfzklyqwvbmctwzmrprlafzshihbvmyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089919.486835-372-240071117696415/AnsiballZ_systemd.py'
Oct 10 09:52:00 compute-1 sudo[104967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:00 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:00 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:52:00 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:00.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:52:00 compute-1 python3.9[104969]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:52:00 compute-1 systemd[1]: Reloading.
Oct 10 09:52:00 compute-1 systemd-sysv-generator[105015]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:52:00 compute-1 systemd-rc-local-generator[105012]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:52:00 compute-1 sudo[104869]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:00 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:52:00 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:52:00 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:52:00 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:52:00 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 09:52:00 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:52:00 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:52:00 compute-1 sudo[104967]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:01 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:01 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:01 compute-1 sudo[105189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrgdlnqsjnwootbipsbvozzqogmhmwhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089920.9684415-396-37135287112126/AnsiballZ_stat.py'
Oct 10 09:52:01 compute-1 sudo[105189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:01 compute-1 python3.9[105192]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:52:01 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:01 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:01 compute-1 sudo[105189]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:01 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:01 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba4002010 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:01 compute-1 ceph-mon[79167]: pgmap v174: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 09:52:01 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:52:01 compute-1 sudo[105268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prkygdtxournkbozyrcztgunuldnlstu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089920.9684415-396-37135287112126/AnsiballZ_file.py'
Oct 10 09:52:01 compute-1 sudo[105268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:01 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:01 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:01 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:01.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:02 compute-1 python3.9[105270]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:52:02 compute-1 sudo[105268]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:52:02 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:02 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:02 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:02.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:02 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/095202 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 09:52:02 compute-1 sudo[105420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naluwewczbomalddntuoueayafhsqjup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089922.3066392-432-138839425160770/AnsiballZ_stat.py'
Oct 10 09:52:02 compute-1 sudo[105420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:02 compute-1 python3.9[105422]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:52:02 compute-1 sudo[105420]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:03 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0002f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:03 compute-1 sudo[105498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knlfwgkulemvhpdkagmngpiatbuqxcrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089922.3066392-432-138839425160770/AnsiballZ_file.py'
Oct 10 09:52:03 compute-1 sudo[105498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:03 compute-1 python3.9[105500]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:52:03 compute-1 sudo[105498]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:03 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:03 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:03 compute-1 ceph-mon[79167]: pgmap v175: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 09:52:03 compute-1 sudo[105651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmsmerbppcrqczvrtphffmfydwufmjff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089923.5008786-468-245759889714252/AnsiballZ_systemd.py'
Oct 10 09:52:03 compute-1 sudo[105651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:03 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:03 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:03 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:03.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:04 compute-1 python3.9[105653]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:52:04 compute-1 systemd[1]: Reloading.
Oct 10 09:52:04 compute-1 systemd-rc-local-generator[105682]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:52:04 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:04 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:04 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:04.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:04 compute-1 systemd-sysv-generator[105686]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:52:04 compute-1 systemd[1]: Starting Create netns directory...
Oct 10 09:52:04 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 10 09:52:04 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 10 09:52:04 compute-1 systemd[1]: Finished Create netns directory.
Oct 10 09:52:04 compute-1 sudo[105651]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:05 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:05 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba4002010 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:05 compute-1 python3.9[105847]: ansible-ansible.builtin.service_facts Invoked
Oct 10 09:52:05 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:05 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0002f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:05 compute-1 network[105864]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 10 09:52:05 compute-1 network[105865]: 'network-scripts' will be removed from distribution in near future.
Oct 10 09:52:05 compute-1 network[105866]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 10 09:52:05 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:05 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:05 compute-1 ceph-mon[79167]: pgmap v176: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 09:52:05 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:05 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:05 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:05.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:06 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:06 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:06 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:06.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:06 compute-1 sudo[105883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 09:52:06 compute-1 sudo[105883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:52:06 compute-1 sudo[105883]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:07 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:52:07 compute-1 ceph-mon[79167]: pgmap v177: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 09:52:07 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:52:07 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:52:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:07 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba4002010 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:07 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0002f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:07 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:07 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:07 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:07.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:08 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:08 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:52:08 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:08.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:52:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:09 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:09 compute-1 ceph-mon[79167]: pgmap v178: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 10 09:52:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:09 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:09 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba4002010 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:09 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:09 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:09 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:09.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:10 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:10 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:10 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:10.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:10 compute-1 sudo[106156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hoafwrxxrbouldhrdkoqbzyxevmclytd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089930.062113-546-187703102538090/AnsiballZ_stat.py'
Oct 10 09:52:10 compute-1 sudo[106156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:10 compute-1 python3.9[106158]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:52:10 compute-1 sudo[106156]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:10 compute-1 sudo[106234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cusrvwdtuarahkvdjcbljhyvbuklucyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089930.062113-546-187703102538090/AnsiballZ_file.py'
Oct 10 09:52:10 compute-1 sudo[106234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:11 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0002f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:11 compute-1 python3.9[106236]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:52:11 compute-1 sudo[106234]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:11 compute-1 sudo[106237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:52:11 compute-1 sudo[106237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:52:11 compute-1 sudo[106237]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:11 compute-1 ceph-mon[79167]: pgmap v179: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 09:52:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:11 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:11 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:11 compute-1 sudo[106412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeqknjnfsnbgngxrxjgahxpihceqnqnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089931.5003448-585-93537947329442/AnsiballZ_file.py'
Oct 10 09:52:11 compute-1 sudo[106412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:12 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:12 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:52:12 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:12.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:52:12 compute-1 python3.9[106414]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:52:12 compute-1 sudo[106412]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:52:12 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:12 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:12 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:12.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:12 compute-1 sudo[106564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebwdqwwgbjmvwqexvkhsbwdhevxdrukq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089932.3190215-609-142183092549973/AnsiballZ_stat.py'
Oct 10 09:52:12 compute-1 sudo[106564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:12 compute-1 python3.9[106566]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:52:12 compute-1 sudo[106564]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:13 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba40091b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:13 compute-1 sudo[106642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tixdjjcucuyeywguenpupotopmknwrfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089932.3190215-609-142183092549973/AnsiballZ_file.py'
Oct 10 09:52:13 compute-1 sudo[106642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:13 compute-1 python3.9[106644]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:52:13 compute-1 sudo[106642]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:13 compute-1 ceph-mon[79167]: pgmap v180: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 10 09:52:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:13 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0002f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:13 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:14 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:14 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:14 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:14.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:14 compute-1 sudo[106795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nchgzkzsfpyujqfxtdirrzpnqhagtaor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089933.8322918-655-210764583061770/AnsiballZ_timezone.py'
Oct 10 09:52:14 compute-1 sudo[106795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:14 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:14 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:52:14 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:14.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:52:14 compute-1 python3.9[106797]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 10 09:52:14 compute-1 systemd[1]: Starting Time & Date Service...
Oct 10 09:52:14 compute-1 systemd[1]: Started Time & Date Service.
Oct 10 09:52:14 compute-1 sudo[106795]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:15 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:15 compute-1 sudo[106952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxqtzqqhxqvdizwmagxjcvtzrksiadmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089935.1302817-681-132819614246117/AnsiballZ_file.py'
Oct 10 09:52:15 compute-1 sudo[106952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:15 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba40091b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:15 compute-1 ceph-mon[79167]: pgmap v181: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:52:15 compute-1 python3.9[106954]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:52:15 compute-1 sudo[106952]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:15 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba40091b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:16 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:16 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:16 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:16.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:16 compute-1 sudo[107104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzlgkhqeqkjnjoevhdyubdamwkdbfpkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089935.9646387-705-11726939354178/AnsiballZ_stat.py'
Oct 10 09:52:16 compute-1 sudo[107104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:16 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:16 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:16 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:16.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:52:16 compute-1 python3.9[107106]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:52:16 compute-1 sudo[107104]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:16 compute-1 sudo[107182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cobkwvomldznkvldzdxrclhxvhovahxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089935.9646387-705-11726939354178/AnsiballZ_file.py'
Oct 10 09:52:16 compute-1 sudo[107182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:17 compute-1 python3.9[107184]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:52:17 compute-1 sudo[107182]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:17 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:17 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0002f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:52:17 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:17 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:17 compute-1 ceph-mon[79167]: pgmap v182: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:52:17 compute-1 sudo[107335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdayaibvzdcyjgermvcpzrsqnhghawfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089937.2569947-742-18149814753074/AnsiballZ_stat.py'
Oct 10 09:52:17 compute-1 sudo[107335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:17 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:17 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:17 compute-1 python3.9[107337]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:52:17 compute-1 sudo[107335]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:18 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:18 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:18 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:18.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:18 compute-1 sudo[107413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpsgjszeubqwlyxmphvbbtobwbczpgew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089937.2569947-742-18149814753074/AnsiballZ_file.py'
Oct 10 09:52:18 compute-1 sudo[107413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:18 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:18 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:18 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:18.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:18 compute-1 python3.9[107415]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.5vj5rwhm recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:52:18 compute-1 sudo[107413]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:18 compute-1 sudo[107565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmupicwidtwqgwayuurvdarcgyfetpea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089938.5954194-777-130575627326878/AnsiballZ_stat.py'
Oct 10 09:52:18 compute-1 sudo[107565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:19 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:19 compute-1 python3.9[107567]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:52:19 compute-1 sudo[107565]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:19 compute-1 sudo[107644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oeyfynmvyvbeyxfglocrudzlpfpdmkdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089938.5954194-777-130575627326878/AnsiballZ_file.py'
Oct 10 09:52:19 compute-1 sudo[107644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:19 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0002f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:19 compute-1 ceph-mon[79167]: pgmap v183: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 09:52:19 compute-1 python3.9[107646]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:52:19 compute-1 sudo[107644]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:19 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:20 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:20 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:52:20 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:20.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:52:20 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:20 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:20 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:20.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:20 compute-1 sudo[107796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eknmfewfnxfguhbigzihbdujpbcgauka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089940.0309927-816-246564305365040/AnsiballZ_command.py'
Oct 10 09:52:20 compute-1 sudo[107796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:20 compute-1 python3.9[107798]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:52:20 compute-1 sudo[107796]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:21 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba40091b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:21 compute-1 sudo[107950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omnuhylrhkkluwdpllokqkeagasyzaxu ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760089940.9837506-840-10136455090719/AnsiballZ_edpm_nftables_from_files.py'
Oct 10 09:52:21 compute-1 sudo[107950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:21 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:21 compute-1 python3[107952]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 10 09:52:21 compute-1 ceph-mon[79167]: pgmap v184: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:52:21 compute-1 sudo[107950]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:21 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0002f20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:22 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:22 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:52:22 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:22.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:52:22 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:52:22 compute-1 sudo[108102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhwklpxzymovazaedmodbdkssnxqhmow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089941.966808-864-118339958191910/AnsiballZ_stat.py'
Oct 10 09:52:22 compute-1 sudo[108102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:22 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:22 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:22 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:22.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:22 compute-1 python3.9[108104]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:52:22 compute-1 sudo[108102]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:22 compute-1 sudo[108180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzynpbvcmwdsyvukushxhwthlmhhxyjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089941.966808-864-118339958191910/AnsiballZ_file.py'
Oct 10 09:52:22 compute-1 sudo[108180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:23 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:23 compute-1 python3.9[108182]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:52:23 compute-1 sudo[108180]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:23 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba40091b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:23 compute-1 sudo[108333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpjnvvymrpgipinqcpxqgvjpjbrfvlcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089943.3344674-900-86806144253447/AnsiballZ_stat.py'
Oct 10 09:52:23 compute-1 sudo[108333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:23 compute-1 ceph-mon[79167]: pgmap v185: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 09:52:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:23 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:23 compute-1 python3.9[108335]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:52:23 compute-1 sudo[108333]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:24 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:24 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:24 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:24.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:24 compute-1 sudo[108411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jozhmqrywyaqkruqeugmxsygjgjnzbhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089943.3344674-900-86806144253447/AnsiballZ_file.py'
Oct 10 09:52:24 compute-1 sudo[108411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:24 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:24 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:24 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:24.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:24 compute-1 python3.9[108413]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:52:24 compute-1 sudo[108411]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:25 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:25 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004410 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:25 compute-1 sudo[108563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwrkledvozclotslpavbfomucwwywbxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089944.7153306-936-114350007202010/AnsiballZ_stat.py'
Oct 10 09:52:25 compute-1 sudo[108563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:25 compute-1 python3.9[108565]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:52:25 compute-1 sudo[108563]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:25 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:25 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:25 compute-1 sudo[108642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfuxqbtmrvcgjpxceftrwhajnkrrbzgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089944.7153306-936-114350007202010/AnsiballZ_file.py'
Oct 10 09:52:25 compute-1 sudo[108642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:25 compute-1 ceph-mon[79167]: pgmap v186: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:52:25 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:25 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba400a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:25 compute-1 python3.9[108644]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:52:25 compute-1 sudo[108642]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:26 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:26 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:26 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:26.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:26 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:26 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:26 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:26.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:26 compute-1 sudo[108794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxqztkzmnjhhhemyiybolmfpdbulrvsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089946.085525-972-241806745508114/AnsiballZ_stat.py'
Oct 10 09:52:26 compute-1 sudo[108794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:26 compute-1 python3.9[108796]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:52:26 compute-1 sudo[108794]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:26 compute-1 sudo[108872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmwgnfoirjdgwejnhgfhwqfgukldymvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089946.085525-972-241806745508114/AnsiballZ_file.py'
Oct 10 09:52:26 compute-1 sudo[108872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:27 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:27 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:27 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:52:27 compute-1 python3.9[108874]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:52:27 compute-1 sudo[108872]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:27 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:27 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004410 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:27 compute-1 ceph-mon[79167]: pgmap v187: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:52:27 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:27 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:27 compute-1 sudo[109025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klhgtegpcfptceukkxlhrcrynzmwqlgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089947.4633272-1008-262969563160373/AnsiballZ_stat.py'
Oct 10 09:52:27 compute-1 sudo[109025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:28 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:28 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:28 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:28.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:28 compute-1 python3.9[109027]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:52:28 compute-1 sudo[109025]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:28 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:28 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:28 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:28.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:28 compute-1 sudo[109103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhitmbwzueqckgluwttssgfnxvunekqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089947.4633272-1008-262969563160373/AnsiballZ_file.py'
Oct 10 09:52:28 compute-1 sudo[109103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:28 compute-1 python3.9[109105]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:52:28 compute-1 sudo[109103]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba400a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:29 compute-1 sudo[109256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrxyrcxihznoykauaomjriiybrhgdlzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089949.0699313-1047-63207626442082/AnsiballZ_command.py'
Oct 10 09:52:29 compute-1 sudo[109256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:29 compute-1 python3.9[109258]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:52:29 compute-1 sudo[109256]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004410 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:29 compute-1 ceph-mon[79167]: pgmap v188: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 09:52:30 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:30 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:30 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:30.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:30 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:30 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:30 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:30.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:30 compute-1 sudo[109411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfsqxirspxdqxgcorjnlxdyisvbycahl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089949.9347782-1071-247126097424072/AnsiballZ_blockinfile.py'
Oct 10 09:52:30 compute-1 sudo[109411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:30 compute-1 python3.9[109413]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:52:30 compute-1 sudo[109411]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:31 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:31 compute-1 sudo[109564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpcbqhdygadxzjnysvdqmncexfpazimz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089950.9121802-1098-133367211707411/AnsiballZ_file.py'
Oct 10 09:52:31 compute-1 sudo[109564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:31 compute-1 sudo[109565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:52:31 compute-1 sudo[109565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:52:31 compute-1 sudo[109565]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:31 compute-1 python3.9[109579]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:52:31 compute-1 sudo[109564]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:31 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba400a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:31 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88002ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:31 compute-1 ceph-mon[79167]: pgmap v189: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:52:31 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:52:32 compute-1 sudo[109742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtofrvjddnzbqmsgiauqsddlswmexqkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089951.6634421-1098-195746072391348/AnsiballZ_file.py'
Oct 10 09:52:32 compute-1 sudo[109742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:32 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:32 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:32 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:32.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:32 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:52:32 compute-1 python3.9[109744]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:52:32 compute-1 sudo[109742]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:32 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:32 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:32 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:32.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:32 compute-1 sudo[109894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oljafqmclusviqatqnsixqugquljiagb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089952.4729865-1143-210383567362071/AnsiballZ_mount.py'
Oct 10 09:52:32 compute-1 sudo[109894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:33 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004410 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:33 compute-1 python3.9[109896]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 10 09:52:33 compute-1 sudo[109894]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:33 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:33 compute-1 sudo[110048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axonawydqrsrpgvrjaubnxvclzwvqlzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089953.377787-1143-185368633582490/AnsiballZ_mount.py'
Oct 10 09:52:33 compute-1 sudo[110048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:33 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba400a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:33 compute-1 ceph-mon[79167]: pgmap v190: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 09:52:33 compute-1 python3.9[110050]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 10 09:52:34 compute-1 sudo[110048]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:34 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:34 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:52:34 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:34.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:52:34 compute-1 sshd-session[102722]: Connection closed by 192.168.122.30 port 60804
Oct 10 09:52:34 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:34 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:52:34 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:34.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:52:34 compute-1 sshd-session[102719]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:52:34 compute-1 systemd[1]: session-44.scope: Deactivated successfully.
Oct 10 09:52:34 compute-1 systemd[1]: session-44.scope: Consumed 37.359s CPU time.
Oct 10 09:52:34 compute-1 systemd-logind[789]: Session 44 logged out. Waiting for processes to exit.
Oct 10 09:52:34 compute-1 systemd-logind[789]: Removed session 44.
Oct 10 09:52:35 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:35 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b68000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:35 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:35 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004410 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:35 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:35 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:35 compute-1 ceph-mon[79167]: pgmap v191: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:52:36 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:36 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:36 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:36.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:36 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:36 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:36 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:36.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:37 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba400a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:52:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:37 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b680016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:37 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004410 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:37 compute-1 ceph-mon[79167]: pgmap v192: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:52:38 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:38 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:38 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:38.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:38 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:38 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:38 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:38.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:39 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:39 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba400a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:39 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b680016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:39 compute-1 ceph-mon[79167]: pgmap v193: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 09:52:40 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:40 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:40 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:40.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:40 compute-1 sshd-session[110079]: Accepted publickey for zuul from 192.168.122.30 port 38338 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:52:40 compute-1 systemd-logind[789]: New session 45 of user zuul.
Oct 10 09:52:40 compute-1 systemd[1]: Started Session 45 of User zuul.
Oct 10 09:52:40 compute-1 sshd-session[110079]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:52:40 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:40 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:40 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:40.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:40 compute-1 sudo[110232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbirqjmxnxgedsqmdpqnbakqyxmgtaxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089960.3746567-19-46852428826145/AnsiballZ_tempfile.py'
Oct 10 09:52:40 compute-1 sudo[110232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:41 compute-1 python3.9[110234]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct 10 09:52:41 compute-1 sudo[110232]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:41 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:41 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004410 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:41 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:41 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:41 compute-1 sudo[110385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sglsepmillksuhsooxoizkijfkwdadau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089961.3018708-55-146471814260022/AnsiballZ_stat.py'
Oct 10 09:52:41 compute-1 sudo[110385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:41 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:41 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba400a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:41 compute-1 ceph-mon[79167]: pgmap v194: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:52:42 compute-1 python3.9[110387]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:52:42 compute-1 sudo[110385]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:42 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:42 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:42 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:42.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:42 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:52:42 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:42 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:42 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:42.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:42 compute-1 sudo[110539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jymdazilpkgrclfeodvclcfkbguhekqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089962.2521431-79-35766983679508/AnsiballZ_slurp.py'
Oct 10 09:52:42 compute-1 sudo[110539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:42 compute-1 python3.9[110541]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Oct 10 09:52:42 compute-1 sudo[110539]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:42 compute-1 ceph-mon[79167]: pgmap v195: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 09:52:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:43 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b680016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:43 compute-1 sudo[110692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crjiobjxwxugcplpbvjepokgjpohiywv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089963.1523638-103-112019410823580/AnsiballZ_stat.py'
Oct 10 09:52:43 compute-1 sudo[110692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:43 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004410 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:43 compute-1 python3.9[110694]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.81qdh614 follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:52:43 compute-1 sudo[110692]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:43 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:44 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:44 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:44 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:44.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:44 compute-1 sudo[110817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjbqifgnuypbubuxhdsdlgzzzecfhotv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089963.1523638-103-112019410823580/AnsiballZ_copy.py'
Oct 10 09:52:44 compute-1 sudo[110817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:44 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:44 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:52:44 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:44.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:52:44 compute-1 python3.9[110819]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.81qdh614 mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760089963.1523638-103-112019410823580/.source.81qdh614 _original_basename=.5g2hjn5u follow=False checksum=2d908d3ce99ab235b2c2751c9a38992c3c685672 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:52:44 compute-1 sudo[110817]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:44 compute-1 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 10 09:52:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:45 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba400a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:45 compute-1 ceph-mon[79167]: pgmap v196: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:52:45 compute-1 sudo[110972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-devehymleobkxkjywephvgcevektfops ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089964.8309772-148-29602509128355/AnsiballZ_setup.py'
Oct 10 09:52:45 compute-1 sudo[110972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:45 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b68002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:45 compute-1 python3.9[110974]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:52:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:45 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004410 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:45 compute-1 sudo[110972]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:46 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:46 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:46 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:46.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:46 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:46 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:46 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:46.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:46 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:52:46 compute-1 sudo[111124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxqriocmyuqvdudeiibqpdwrqffsmrls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089966.046227-173-95441001959464/AnsiballZ_blockinfile.py'
Oct 10 09:52:46 compute-1 sudo[111124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:46 compute-1 python3.9[111126]: ansible-ansible.builtin.blockinfile Invoked with block=compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCs576V3VvbSgv48Ml4JM3ripPY5VUVh8vdkDr1njjfd7J/WrQQkTf/D0b7+eGTXj3Y1fx1/haVrDafo7g0NqcSZX+zNUgTCnYPWafo7RMG4Q7ITVk1NPIkAC1cDUxHNeWhXaOkxCz96sTkO4aNW3uoFjsp2JkJtRJmHzT7q/bc0N9x7YcWh9vwRRBiOKlV8cWMHuHUzOlloEQLN67Dht1xHWr1eO/SITqUlWY13tc/54xQuo8nBQNNX9ArhMbJz2a9AoNVUAAYFF8hWFI5ES/GL9qsCp8dnmAtrY4Rc07QmHo1RkcjXe1f6D+vymRIP3YOqIjlWp0blCTfcCGno5lBa9f5JachIsogk+5+GYx4AAbWLyxxecfKzdCxrGnQlfFgldc1xDN1RG+8HwFEAuHQDWTCDUgF67FXSHy7aVxrdzU4046193/o3VKTpSaJmFldASxFgyUeujs56OgC0qYM0zKV4jOsMBcocVHvH/1FOPWIr81XXYvu6C/Ntd6sBj0=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGSf7pFS/S1SmUMk/yMobwR+LTaQZlAhBqo7Ido5r8dg
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB1l0EOuMseZ7ulHkfzzVtKv+5A9EWRy+oXVB+t370vohhJoN3+lviS8xoR8GttJUcHVCaeioniRtOWysbNdC0I=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDUnwO+j5aInA4FKMx5pWF8B0Zp6L17GsYV5RBbu6iT67LtXjwbz5nP4EC7t80boMHnS7DRNCAxF0FNMVhQ9o4+1E1n2mrUxxAw8YxcZTabu/lAqRb4I6RzmXdXSA9mF8O3onswi/KhJg6YUTFEWCuxWrMLco15IatKi+hNqcRUk1DreR2L/YN0W5qXkvj1z3aoph1h3Yn1lRjuQDrVHp6lCywixC2pHwYG+CrPyX+0PkXJg+JRvRdxNCIw0D0zOkJrnppmT8XpIj42JLRUGGV592XFVXHiEhZdOI2bdzPy490EfIbWF9Symqi/V5vf8SK9LMOscHXkD7jsT6VKzsUXyk6/IzzZ2TzhD173lt8HpRJyaZq4ME0ZSVYNyD58DN/CQ3xpO1c1E8Wp4fUswc4WHmb/eILnY0lDXOZt6Hb/e+K6RHu5e5GOo0KSfei/LyrqJkBQn2P8UkbJvrUh2bNw+whjvT5CmXd3rPCw+Xq3/K3Gpit1K/4pC0zGC+CQr7E=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILklS4uW4IrGY5dWZTg4VeKVeFB3jPeUpu/8f4D1+rd5
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCelD2lLiMWT09YjxTI9IfdSnHfdMuHKAAEYFKZmJg34mgwUIDqUQqoc9I6a7Ps9pRizY+UpHWL//lD7hvvhD5k=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDarlOcgDXqRdSww3oIuqu7nGIBJToNGSnU1ljOr6GTlHTxxOoTztIrvZrPaJA8w/ixztkhFZZSdRPw4meYayY05CNu9SneiL62twzDLDsqeDPAspkh69Ljj5aGCLf6GJDiK0m2h1jLDIFtXH3lIQE9781zA7ZQ8+/xeF4yRS1/Fb5CXDG+oi/J0veCffs6t0TYmrUfSgS2H2y0UxNu7C6GoQKRde1arPLOYexvlg2RjlWM6Ex4JCqTAd9EN330Kh4HUr3r46ET8mwi1mPndibbiW0heXgrg8FeV5hBqOxQsGgLEKpX1cNAz6Rr0C5Hg1xfGcsJtep88vbJFmMyV1jNowDtJCYpprqa16Nj35HBuuz7zbzVlIdeQhEJ9I4I7eNhUxlb2/XYRXy2hfsrM9D2TP7B+bVPLjlqgqy8stBhGBCtH32ppNsXHE6uGPHMovcz2VhbP/P3sp9NQV+hF2Q0RbBXrQZkEI9YJdhxQw5hyOqwfPrEEBFy8FpzSKfBAW0=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIC1nQuW/lbxVJxo9H20J7i0+Z6cHtufrF4VbA6zs724f
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB0oTxSrAqx34tAubl7rouYPI7qhs6NhoDmGr3PTW1+mypEQw0EO+pZ99zSRnweC5RBoL080AgUKo7KN+v3LDHw=
                                              create=True mode=0644 path=/tmp/ansible.81qdh614 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:52:46 compute-1 sudo[111124]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:47 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:47 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:52:47 compute-1 ceph-mon[79167]: pgmap v197: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:52:47 compute-1 sudo[111277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvwbmohegjawsknolhijlgpnpvemorgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089966.979277-197-56816296512230/AnsiballZ_command.py'
Oct 10 09:52:47 compute-1 sudo[111277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:47 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:47 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba400a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:47 compute-1 python3.9[111279]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.81qdh614' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:52:47 compute-1 sudo[111277]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:47 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:47 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b68002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:48 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:48 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:52:48 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:48.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:52:48 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:48 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:48 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:48.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:48 compute-1 sudo[111431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frxnifomtdsdmwegqbxiomnnjhawbkdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089967.91802-221-272391849288047/AnsiballZ_file.py'
Oct 10 09:52:48 compute-1 sudo[111431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:48 compute-1 python3.9[111433]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.81qdh614 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:52:48 compute-1 sudo[111431]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:49 compute-1 sshd-session[110082]: Connection closed by 192.168.122.30 port 38338
Oct 10 09:52:49 compute-1 sshd-session[110079]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:52:49 compute-1 systemd-logind[789]: Session 45 logged out. Waiting for processes to exit.
Oct 10 09:52:49 compute-1 systemd[1]: session-45.scope: Deactivated successfully.
Oct 10 09:52:49 compute-1 systemd[1]: session-45.scope: Consumed 6.170s CPU time.
Oct 10 09:52:49 compute-1 systemd-logind[789]: Removed session 45.
Oct 10 09:52:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:49 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004430 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:49 compute-1 ceph-mon[79167]: pgmap v198: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 09:52:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:49 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:49 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba400a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:50 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:50 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:50 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:50.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:50 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:50 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 09:52:50 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:50.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 09:52:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:51 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b68002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:51 compute-1 sudo[111461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:52:51 compute-1 sudo[111461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:52:51 compute-1 sudo[111461]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:51 compute-1 ceph-mon[79167]: pgmap v199: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:52:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:51 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004450 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:51 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:52 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:52 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:52 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:52.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:52:52 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:52 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:52 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:52.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:53 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba400a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:53 compute-1 ceph-mon[79167]: pgmap v200: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 09:52:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:53 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b68003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:53 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004470 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:54 compute-1 sshd-session[111487]: Accepted publickey for zuul from 192.168.122.30 port 49988 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:52:54 compute-1 systemd-logind[789]: New session 46 of user zuul.
Oct 10 09:52:54 compute-1 systemd[1]: Started Session 46 of User zuul.
Oct 10 09:52:54 compute-1 sshd-session[111487]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:52:54 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:54 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:52:54 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:54.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:52:54 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:54 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 09:52:54 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:54.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 09:52:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:55 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:55 compute-1 python3.9[111640]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:52:55 compute-1 ceph-mon[79167]: pgmap v201: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:52:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:55 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba400a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:55 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b68003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:56 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:56 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:52:56 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:56.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:52:56 compute-1 sudo[111795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azwmenctxaizhwmczbenikurehxlpkst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089975.5768297-57-102390867749511/AnsiballZ_systemd.py'
Oct 10 09:52:56 compute-1 sudo[111795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:56 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:56 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:56 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:56.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:56 compute-1 python3.9[111797]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 10 09:52:56 compute-1 sudo[111795]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:57 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:52:57 compute-1 sudo[111949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aoxbravlnukwsalqaxkirzxgtietbbtm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089976.857676-81-125653000431301/AnsiballZ_systemd.py'
Oct 10 09:52:57 compute-1 sudo[111949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:57 compute-1 ceph-mon[79167]: pgmap v202: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:52:57 compute-1 python3.9[111951]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 09:52:57 compute-1 sudo[111949]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:57 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:57 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba400a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:58 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:58 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:58 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:52:58.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:58 compute-1 sudo[112103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onaaacklgknlcnboyxzzhassodxunnxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089977.8328993-108-16168838700684/AnsiballZ_command.py'
Oct 10 09:52:58 compute-1 sudo[112103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:58 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:52:58 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:52:58 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:52:58.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:52:58 compute-1 python3.9[112105]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:52:58 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0.
Oct 10 09:52:58 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:52:58.568489) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 09:52:58 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22
Oct 10 09:52:58 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089978568559, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2149, "num_deletes": 251, "total_data_size": 6235914, "memory_usage": 6302056, "flush_reason": "Manual Compaction"}
Oct 10 09:52:58 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started
Oct 10 09:52:58 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089978584828, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 2509325, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10700, "largest_seqno": 12844, "table_properties": {"data_size": 2503062, "index_size": 3206, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15654, "raw_average_key_size": 20, "raw_value_size": 2489405, "raw_average_value_size": 3195, "num_data_blocks": 143, "num_entries": 779, "num_filter_entries": 779, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089776, "oldest_key_time": 1760089776, "file_creation_time": 1760089978, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Oct 10 09:52:58 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 16408 microseconds, and 10226 cpu microseconds.
Oct 10 09:52:58 compute-1 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 09:52:58 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:52:58.584900) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 2509325 bytes OK
Oct 10 09:52:58 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:52:58.584929) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started
Oct 10 09:52:58 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:52:58.586601) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done
Oct 10 09:52:58 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:52:58.586624) EVENT_LOG_v1 {"time_micros": 1760089978586616, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 09:52:58 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:52:58.586647) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 09:52:58 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 6226380, prev total WAL file size 6226380, number of live WAL files 2.
Oct 10 09:52:58 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 09:52:58 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:52:58.589050) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323532' seq:0, type:0; will stop at (end)
Oct 10 09:52:58 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 09:52:58 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(2450KB)], [21(12MB)]
Oct 10 09:52:58 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089978589075, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 15935412, "oldest_snapshot_seqno": -1}
Oct 10 09:52:58 compute-1 sudo[112103]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:58 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 4434 keys, 14286946 bytes, temperature: kUnknown
Oct 10 09:52:58 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089978650199, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 14286946, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14253052, "index_size": 21688, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11141, "raw_key_size": 111770, "raw_average_key_size": 25, "raw_value_size": 14167996, "raw_average_value_size": 3195, "num_data_blocks": 932, "num_entries": 4434, "num_filter_entries": 4434, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089522, "oldest_key_time": 0, "file_creation_time": 1760089978, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}}
Oct 10 09:52:58 compute-1 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 09:52:58 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:52:58.650586) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 14286946 bytes
Oct 10 09:52:58 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:52:58.652197) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 260.3 rd, 233.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 12.8 +0.0 blob) out(13.6 +0.0 blob), read-write-amplify(12.0) write-amplify(5.7) OK, records in: 4857, records dropped: 423 output_compression: NoCompression
Oct 10 09:52:58 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:52:58.652236) EVENT_LOG_v1 {"time_micros": 1760089978652218, "job": 10, "event": "compaction_finished", "compaction_time_micros": 61229, "compaction_time_cpu_micros": 27540, "output_level": 6, "num_output_files": 1, "total_output_size": 14286946, "num_input_records": 4857, "num_output_records": 4434, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 09:52:58 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 09:52:58 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089978653318, "job": 10, "event": "table_file_deletion", "file_number": 23}
Oct 10 09:52:58 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 09:52:58 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089978657735, "job": 10, "event": "table_file_deletion", "file_number": 21}
Oct 10 09:52:58 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:52:58.589014) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:52:58 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:52:58.657838) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:52:58 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:52:58.657844) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:52:58 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:52:58.657846) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:52:58 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:52:58.657848) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:52:58 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:52:58.657850) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:52:59 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:59 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b68003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:59 compute-1 sudo[112257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwgxlcoeyqfjlbfjzdyjuzyqzstfbigt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089978.8292365-132-184589779489833/AnsiballZ_stat.py'
Oct 10 09:52:59 compute-1 sudo[112257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:52:59 compute-1 python3.9[112259]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:52:59 compute-1 sudo[112257]: pam_unix(sudo:session): session closed for user root
Oct 10 09:52:59 compute-1 ceph-mon[79167]: pgmap v203: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 09:52:59 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:59 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba00044b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:52:59 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:52:59 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:00 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:00 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:53:00 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:00.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:53:00 compute-1 sudo[112409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lokhafksadkpuhfypqocnmarjphtofrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089979.7904122-159-161454546301964/AnsiballZ_file.py'
Oct 10 09:53:00 compute-1 sudo[112409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:00 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:00 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:00 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:00.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:00 compute-1 python3.9[112411]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:53:00 compute-1 sudo[112409]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:00 compute-1 sshd-session[111490]: Connection closed by 192.168.122.30 port 49988
Oct 10 09:53:00 compute-1 sshd-session[111487]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:53:00 compute-1 systemd[1]: session-46.scope: Deactivated successfully.
Oct 10 09:53:00 compute-1 systemd[1]: session-46.scope: Consumed 5.007s CPU time.
Oct 10 09:53:00 compute-1 systemd-logind[789]: Session 46 logged out. Waiting for processes to exit.
Oct 10 09:53:00 compute-1 systemd-logind[789]: Removed session 46.
Oct 10 09:53:01 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:01 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba400a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:01 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:01 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b68003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:01 compute-1 ceph-mon[79167]: pgmap v204: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:53:01 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:53:01 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:01 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba00044d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:02 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:02 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:53:02 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:02.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:53:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:53:02 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:02 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:53:02 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:02.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:53:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:03 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:03 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba400a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:03 compute-1 ceph-mon[79167]: pgmap v205: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 09:53:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:03 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:04 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:04 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:04 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:04.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:04 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:04 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:04 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:04.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:05 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:05 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004580 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:05 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:05 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:05 compute-1 ceph-mon[79167]: pgmap v206: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:53:05 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:05 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88002ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:06 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:06 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:53:06 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:06.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:53:06 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:06 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:06 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:06.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:06 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/095306 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 09:53:06 compute-1 sshd-session[112441]: Accepted publickey for zuul from 192.168.122.30 port 37418 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:53:06 compute-1 systemd-logind[789]: New session 47 of user zuul.
Oct 10 09:53:06 compute-1 systemd[1]: Started Session 47 of User zuul.
Oct 10 09:53:06 compute-1 sshd-session[112441]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:53:06 compute-1 sudo[112468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:53:06 compute-1 sudo[112468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:53:06 compute-1 sudo[112468]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:06 compute-1 sudo[112521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 09:53:06 compute-1 sudo[112521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:53:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:07 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:53:07 compute-1 sudo[112521]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:07 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004580 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:07 compute-1 ceph-mon[79167]: pgmap v207: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:53:07 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:53:07 compute-1 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 09:53:07 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:53:07 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:53:07 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:53:07 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 09:53:07 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:53:07 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:53:07 compute-1 python3.9[112675]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:53:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:07 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:08 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:08 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:08 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:08.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:08 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:08 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:08 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:08.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:08 compute-1 sudo[112830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzueergpxekstqfzoeokhyqbrfhvfget ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089988.311711-63-85623475714659/AnsiballZ_setup.py'
Oct 10 09:53:08 compute-1 sudo[112830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:09 compute-1 python3.9[112832]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 09:53:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:09 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88002ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:09 compute-1 sudo[112830]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:09 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:09 compute-1 sudo[112915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xivolgbynchotdgluinkkeqqwkaeyevd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760089988.311711-63-85623475714659/AnsiballZ_dnf.py'
Oct 10 09:53:09 compute-1 sudo[112915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:09 compute-1 ceph-mon[79167]: pgmap v208: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 09:53:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:09 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:09 compute-1 python3.9[112917]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 10 09:53:10 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:10 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:53:10 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:10.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:53:10 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:10 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:10 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:10.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:11 compute-1 sudo[112915]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:11 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:11 compute-1 sudo[112996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:53:11 compute-1 sudo[112996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:53:11 compute-1 sudo[112996]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:11 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88002ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:11 compute-1 ceph-mon[79167]: pgmap v209: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:53:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:11 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:11 compute-1 python3.9[113094]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:53:12 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:12 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:12 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:12.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:53:12 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:12 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:12 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:12.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:13 compute-1 sudo[113195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 09:53:13 compute-1 sudo[113195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:53:13 compute-1 sudo[113195]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:13 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:13 compute-1 python3.9[113270]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 10 09:53:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:13 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:13 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b88002ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:13 compute-1 ceph-mon[79167]: pgmap v210: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:53:13 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:53:13 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:53:14 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:14 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:53:14 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:14.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:53:14 compute-1 python3.9[113421]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:53:14 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:14 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:14 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:14.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:14 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0.
Oct 10 09:53:14 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:53:14.966981) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 09:53:14 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25
Oct 10 09:53:14 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089994967016, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 429, "num_deletes": 251, "total_data_size": 564009, "memory_usage": 572648, "flush_reason": "Manual Compaction"}
Oct 10 09:53:14 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started
Oct 10 09:53:14 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089994971404, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 372836, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12849, "largest_seqno": 13273, "table_properties": {"data_size": 370373, "index_size": 563, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5935, "raw_average_key_size": 18, "raw_value_size": 365412, "raw_average_value_size": 1131, "num_data_blocks": 24, "num_entries": 323, "num_filter_entries": 323, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089979, "oldest_key_time": 1760089979, "file_creation_time": 1760089994, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Oct 10 09:53:14 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 4466 microseconds, and 2228 cpu microseconds.
Oct 10 09:53:14 compute-1 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 09:53:14 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:53:14.971445) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 372836 bytes OK
Oct 10 09:53:14 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:53:14.971467) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started
Oct 10 09:53:14 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:53:14.973067) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done
Oct 10 09:53:14 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:53:14.973082) EVENT_LOG_v1 {"time_micros": 1760089994973077, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 09:53:14 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:53:14.973100) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 09:53:14 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 561283, prev total WAL file size 561283, number of live WAL files 2.
Oct 10 09:53:14 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 09:53:14 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:53:14.973681) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Oct 10 09:53:14 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 09:53:14 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(364KB)], [24(13MB)]
Oct 10 09:53:14 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089994973725, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 14659782, "oldest_snapshot_seqno": -1}
Oct 10 09:53:14 compute-1 ceph-mon[79167]: pgmap v211: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:53:15 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 4242 keys, 12701680 bytes, temperature: kUnknown
Oct 10 09:53:15 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089995032582, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 12701680, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12670785, "index_size": 19201, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10629, "raw_key_size": 108719, "raw_average_key_size": 25, "raw_value_size": 12590731, "raw_average_value_size": 2968, "num_data_blocks": 813, "num_entries": 4242, "num_filter_entries": 4242, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089522, "oldest_key_time": 0, "file_creation_time": 1760089994, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}}
Oct 10 09:53:15 compute-1 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 09:53:15 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:53:15.032851) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 12701680 bytes
Oct 10 09:53:15 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:53:15.033877) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 248.7 rd, 215.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 13.6 +0.0 blob) out(12.1 +0.0 blob), read-write-amplify(73.4) write-amplify(34.1) OK, records in: 4757, records dropped: 515 output_compression: NoCompression
Oct 10 09:53:15 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:53:15.033910) EVENT_LOG_v1 {"time_micros": 1760089995033896, "job": 12, "event": "compaction_finished", "compaction_time_micros": 58943, "compaction_time_cpu_micros": 37295, "output_level": 6, "num_output_files": 1, "total_output_size": 12701680, "num_input_records": 4757, "num_output_records": 4242, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 09:53:15 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 09:53:15 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089995034161, "job": 12, "event": "table_file_deletion", "file_number": 26}
Oct 10 09:53:15 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 09:53:15 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760089995039224, "job": 12, "event": "table_file_deletion", "file_number": 24}
Oct 10 09:53:15 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:53:14.973520) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:53:15 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:53:15.039294) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:53:15 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:53:15.039302) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:53:15 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:53:15.039304) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:53:15 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:53:15.039306) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:53:15 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-09:53:15.039308) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 09:53:15 compute-1 python3.9[113571]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:53:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:15 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:15 compute-1 sshd-session[112444]: Connection closed by 192.168.122.30 port 37418
Oct 10 09:53:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:15 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba00048c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:15 compute-1 sshd-session[112441]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:53:15 compute-1 systemd[1]: session-47.scope: Deactivated successfully.
Oct 10 09:53:15 compute-1 systemd[1]: session-47.scope: Consumed 6.729s CPU time.
Oct 10 09:53:15 compute-1 systemd-logind[789]: Session 47 logged out. Waiting for processes to exit.
Oct 10 09:53:15 compute-1 systemd-logind[789]: Removed session 47.
Oct 10 09:53:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:15 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 09:53:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:15 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:16 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:16 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:16 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:16.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:53:16 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:16 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:16 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:16.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:17 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:17 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b880034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:53:17 compute-1 ceph-mon[79167]: pgmap v212: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:53:17 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:17 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c0041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:17 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:17 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba00048e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:18 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:18 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:18 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:18.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:18 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:18 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:53:18 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:18.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:53:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:18 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 09:53:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:18 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 09:53:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:19 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba00048e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:19 compute-1 ceph-mon[79167]: pgmap v213: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 09:53:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:19 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b880034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:19 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c0041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:20 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:20 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:53:20 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:20.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:53:20 compute-1 sshd-session[113599]: Accepted publickey for zuul from 192.168.122.30 port 37252 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:53:20 compute-1 systemd-logind[789]: New session 48 of user zuul.
Oct 10 09:53:20 compute-1 systemd[1]: Started Session 48 of User zuul.
Oct 10 09:53:20 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:20 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:53:20 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:20.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:53:20 compute-1 sshd-session[113599]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:53:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:21 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:21 compute-1 ceph-mon[79167]: pgmap v214: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 09:53:21 compute-1 python3.9[113753]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:53:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:21 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004900 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:21 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 09:53:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:21 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b880034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:22 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:22 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:53:22 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:22.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:53:22 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:53:22 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:22 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:53:22 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:22.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:53:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:23 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c0041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:23 compute-1 sudo[113907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rybqbgcjezdnsfhpgxjabuonwkqegokw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090002.7432115-111-12954686068450/AnsiballZ_file.py'
Oct 10 09:53:23 compute-1 sudo[113907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:23 compute-1 python3.9[113909]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:53:23 compute-1 sudo[113907]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:23 compute-1 ceph-mon[79167]: pgmap v215: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 09:53:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:23 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:23 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004920 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:23 compute-1 sudo[114060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcloihfthpkekwjkhogwrgqwtxjormgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090003.598484-111-170628546822091/AnsiballZ_file.py'
Oct 10 09:53:23 compute-1 sudo[114060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:24 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:24 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:24 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:24.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:24 compute-1 python3.9[114062]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:53:24 compute-1 sudo[114060]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:24 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:24 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:24 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:24.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:24 compute-1 sudo[114212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvjsscyydbkjytyhqqdcyvodfftdqgog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090004.404119-159-218216821496914/AnsiballZ_stat.py'
Oct 10 09:53:24 compute-1 sudo[114212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:25 compute-1 python3.9[114214]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:53:25 compute-1 sudo[114212]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:25 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:25 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b880034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:25 compute-1 ceph-mon[79167]: pgmap v216: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 09:53:25 compute-1 sudo[114336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyxkinvpmzfhcapoaimvuyzlaargnkyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090004.404119-159-218216821496914/AnsiballZ_copy.py'
Oct 10 09:53:25 compute-1 sudo[114336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:25 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:25 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c0041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:25 compute-1 python3.9[114338]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090004.404119-159-218216821496914/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=fc301ff04c1bdbf67ce21f61b2409e6eab9f5113 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:53:25 compute-1 sudo[114336]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:25 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:25 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:26 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:26 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:53:26 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:26.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:53:26 compute-1 sudo[114488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpnmqyioswqeupfrpqllofdkaiinnswi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090005.944469-159-266128578410019/AnsiballZ_stat.py'
Oct 10 09:53:26 compute-1 sudo[114488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:26 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:26 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:26 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:26.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:26 compute-1 python3.9[114490]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:53:26 compute-1 sudo[114488]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:26 compute-1 sudo[114611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apvudjmhmhlriknfpioehxfhutcuhvup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090005.944469-159-266128578410019/AnsiballZ_copy.py'
Oct 10 09:53:26 compute-1 sudo[114611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:27 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:27 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004940 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:27 compute-1 python3.9[114613]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090005.944469-159-266128578410019/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=6d432417c0c3c485924638569c72973f4b3272fb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:53:27 compute-1 sudo[114611]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:27 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:53:27 compute-1 ceph-mon[79167]: pgmap v217: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 09:53:27 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:27 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b880034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:27 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:27 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c0041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:27 compute-1 sudo[114764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgflmnbdtzeqesvbbdbxduujmqxyhunp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090007.5264344-159-1861374170953/AnsiballZ_stat.py'
Oct 10 09:53:27 compute-1 sudo[114764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:28 compute-1 python3.9[114766]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:53:28 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:28 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:53:28 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:28.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:53:28 compute-1 sudo[114764]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:28 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:28 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:28 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:28.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/095328 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 09:53:28 compute-1 sudo[114887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucjchazdvovcmaiwiwqrvcjwzcowndga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090007.5264344-159-1861374170953/AnsiballZ_copy.py'
Oct 10 09:53:28 compute-1 sudo[114887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:28 compute-1 python3.9[114889]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090007.5264344-159-1861374170953/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=0788f60270857301d82728379b3c6f1e054161c8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:53:28 compute-1 sudo[114887]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:29 compute-1 ceph-mon[79167]: pgmap v218: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 09:53:29 compute-1 sudo[115040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cunktdzhsteajgugaqbiuxmqngcbhebz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090009.2191184-303-119198441080255/AnsiballZ_file.py'
Oct 10 09:53:29 compute-1 sudo[115040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004960 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:29 compute-1 python3.9[115042]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:53:29 compute-1 sudo[115040]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:29 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b880034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:30 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:30 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:30 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:30.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:30 compute-1 sudo[115192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srmevrnbkfymmluslafhemtjvskthnqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090009.9923875-303-99967996036771/AnsiballZ_file.py'
Oct 10 09:53:30 compute-1 sudo[115192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:30 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:30 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:30 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:30.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:30 compute-1 python3.9[115194]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:53:30 compute-1 sudo[115192]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:31 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c0041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:31 compute-1 sudo[115345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohglvbxednnrcfaevjjlatqzygvbmbow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090011.0119429-355-29036934832909/AnsiballZ_stat.py'
Oct 10 09:53:31 compute-1 sudo[115345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:31 compute-1 python3.9[115347]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:53:31 compute-1 sudo[115345]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:31 compute-1 ceph-mon[79167]: pgmap v219: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 09:53:31 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:53:31 compute-1 sudo[115348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:53:31 compute-1 sudo[115348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:53:31 compute-1 sudo[115348]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:31 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:31 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004980 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:32 compute-1 sudo[115493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvixrscczqzaiqizifmtjwdhugpwgyrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090011.0119429-355-29036934832909/AnsiballZ_copy.py'
Oct 10 09:53:32 compute-1 sudo[115493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:32 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:32 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:32 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:32.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:32 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:53:32 compute-1 python3.9[115495]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090011.0119429-355-29036934832909/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=02480a739908564efbb8591bd6a1d73205710dc7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:53:32 compute-1 sudo[115493]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:32 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:32 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:32 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:32.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:32 compute-1 sudo[115645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwqrdereusujbjuxzvvbpgbdsssboalm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090012.4455447-355-55450022385589/AnsiballZ_stat.py'
Oct 10 09:53:32 compute-1 sudo[115645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:33 compute-1 python3.9[115647]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:53:33 compute-1 sudo[115645]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:33 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b880034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:33 compute-1 sudo[115769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zysyvofilbakmlrgfatczxortsnlcpgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090012.4455447-355-55450022385589/AnsiballZ_copy.py'
Oct 10 09:53:33 compute-1 sudo[115769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:33 compute-1 ceph-mon[79167]: pgmap v220: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 10 09:53:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:33 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c0041a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:33 compute-1 python3.9[115771]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090012.4455447-355-55450022385589/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=abcc61006dfeb8ab87ea24afb3b53290e7b990dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:53:33 compute-1 sudo[115769]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:33 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:34 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:34 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:34 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:34.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:34 compute-1 sudo[115921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktmpknmiyurgchnjyowgcsphbwtxcljr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090013.907635-355-141073060896331/AnsiballZ_stat.py'
Oct 10 09:53:34 compute-1 sudo[115921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:34 compute-1 python3.9[115923]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:53:34 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:34 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:34 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:34.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:34 compute-1 sudo[115921]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:35 compute-1 sudo[116044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-invjklsxpldpnnrdpuwuaovrsppgrpxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090013.907635-355-141073060896331/AnsiballZ_copy.py'
Oct 10 09:53:35 compute-1 sudo[116044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:35 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:35 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba00049a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:35 compute-1 python3.9[116046]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090013.907635-355-141073060896331/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=f29b3f60c4947f05538559980518c0fcc28c88a6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:53:35 compute-1 sudo[116044]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:35 compute-1 ceph-mon[79167]: pgmap v221: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 09:53:35 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:35 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b880034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:35 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:35 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b68002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:35 compute-1 sudo[116198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-taaqgdymahpzzliwslfsshdayohpgrap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090015.537932-505-54384343012120/AnsiballZ_file.py'
Oct 10 09:53:35 compute-1 sudo[116198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:36 compute-1 python3.9[116200]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:53:36 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:36 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:36 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:36.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:36 compute-1 sudo[116198]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:36 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:36 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 09:53:36 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:36.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 09:53:36 compute-1 sudo[116350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amoaekoelkbuuomdroebxalipzyyoeyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090016.3190644-505-247321630884969/AnsiballZ_file.py'
Oct 10 09:53:36 compute-1 sudo[116350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:36 compute-1 python3.9[116352]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:53:36 compute-1 sudo[116350]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:37 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:53:37 compute-1 sudo[116504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onkziagqykkrqazffmenbaxadnatwsek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090017.0920355-553-276572821215870/AnsiballZ_stat.py'
Oct 10 09:53:37 compute-1 sudo[116504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:37 compute-1 ceph-mon[79167]: pgmap v222: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 09:53:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:37 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba00049c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:37 compute-1 python3.9[116506]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:53:37 compute-1 sudo[116504]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:37 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba4000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:38 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:38 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:53:38 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:38.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:53:38 compute-1 sudo[116627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crjvumtamulssvkmvghmiuqpfnpxetyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090017.0920355-553-276572821215870/AnsiballZ_copy.py'
Oct 10 09:53:38 compute-1 sudo[116627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:38 compute-1 python3.9[116629]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090017.0920355-553-276572821215870/.source.crt _original_basename=compute-1.ctlplane.example.com-tls.crt follow=False checksum=2eaad5181c478c56c6664f5d92519151a29ae939 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:53:38 compute-1 sudo[116627]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:38 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:38 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:38 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:38.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:38 compute-1 sudo[116779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdfiapsvomacbuacuomarirxmwokjmua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090018.5802143-553-76385888343045/AnsiballZ_stat.py'
Oct 10 09:53:38 compute-1 sudo[116779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:39 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b68002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:39 compute-1 python3.9[116781]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:53:39 compute-1 sudo[116779]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:39 compute-1 sudo[116903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cunjegcateezaujlisiudjozhjauavqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090018.5802143-553-76385888343045/AnsiballZ_copy.py'
Oct 10 09:53:39 compute-1 sudo[116903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:39 compute-1 ceph-mon[79167]: pgmap v223: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 10 09:53:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:39 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:39 compute-1 python3.9[116905]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090018.5802143-553-76385888343045/.source.crt _original_basename=compute-1.ctlplane.example.com-ca.crt follow=False checksum=abcc61006dfeb8ab87ea24afb3b53290e7b990dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:53:39 compute-1 sudo[116903]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:39 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba00049c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:40 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:40 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:40 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:40.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:40 compute-1 sudo[117055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zybjpemofhxavzwohmomeuefaqnmytes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090019.9343321-553-230285406189437/AnsiballZ_stat.py'
Oct 10 09:53:40 compute-1 sudo[117055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:40 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:40 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:40 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:40.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:40 compute-1 python3.9[117057]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:53:40 compute-1 sudo[117055]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:40 compute-1 sudo[117178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhuvqfzgzwssxsyfvagwllwbhpmqaous ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090019.9343321-553-230285406189437/AnsiballZ_copy.py'
Oct 10 09:53:40 compute-1 sudo[117178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:41 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:41 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba4000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:41 compute-1 python3.9[117180]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090019.9343321-553-230285406189437/.source.key _original_basename=compute-1.ctlplane.example.com-tls.key follow=False checksum=ad198db31845dce8dbb361567f3eab9b32ae6934 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:53:41 compute-1 sudo[117178]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:41 compute-1 ceph-mon[79167]: pgmap v224: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:53:41 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:41 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b68002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:41 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:41 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:42 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:42 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:42 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:42.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:42 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:53:42 compute-1 sudo[117331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrpgelfejkzuztssorrkdkvlhdogirhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090021.924124-752-4367879622551/AnsiballZ_file.py'
Oct 10 09:53:42 compute-1 sudo[117331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:42 compute-1 python3.9[117333]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:53:42 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:42 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:42 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:42.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:42 compute-1 sudo[117331]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:43 compute-1 sudo[117483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvkdhhzbsecrtsiiqupubmdwgfrergdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090022.680377-781-79825467494619/AnsiballZ_stat.py'
Oct 10 09:53:43 compute-1 sudo[117483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:43 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba00049c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:43 compute-1 python3.9[117485]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:53:43 compute-1 sudo[117483]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:43 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba4000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:43 compute-1 ceph-mon[79167]: pgmap v225: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 09:53:43 compute-1 sudo[117607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbroldsigqhyzkxfhyrwknuhelshcfhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090022.680377-781-79825467494619/AnsiballZ_copy.py'
Oct 10 09:53:43 compute-1 sudo[117607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:43 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b68002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:43 compute-1 python3.9[117609]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090022.680377-781-79825467494619/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=588de6fcfc4f8f2f1febb9ce163ed2886e4b0ed4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:53:43 compute-1 sudo[117607]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:44 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:44 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:44 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:44.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:44 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:44 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:44 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:44.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:44 compute-1 sudo[117759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rosgvogfhollqmnfmkabsghxlxmhmshv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090024.1861262-830-146982811216761/AnsiballZ_file.py'
Oct 10 09:53:44 compute-1 sudo[117759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:44 compute-1 python3.9[117761]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:53:44 compute-1 sudo[117759]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:45 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:45 compute-1 sudo[117912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqxtvuaoihxiqrjecsspwoksnpgigcnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090025.0293465-856-113054238319818/AnsiballZ_stat.py'
Oct 10 09:53:45 compute-1 sudo[117912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:45 compute-1 python3.9[117914]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:53:45 compute-1 sudo[117912]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:45 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba00049c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:45 compute-1 ceph-mon[79167]: pgmap v226: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:53:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:45 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba4000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:46 compute-1 sudo[118035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewjzlzvsgjlulfaasyqdrtokzznzbzhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090025.0293465-856-113054238319818/AnsiballZ_copy.py'
Oct 10 09:53:46 compute-1 sudo[118035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:46 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:46 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:53:46 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:46.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:53:46 compute-1 python3.9[118037]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090025.0293465-856-113054238319818/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=588de6fcfc4f8f2f1febb9ce163ed2886e4b0ed4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:53:46 compute-1 sudo[118035]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:46 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:46 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:46 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:46.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:46 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:53:46 compute-1 sudo[118187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yznwmmcdfqhabelwwjksarertiaswyxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090026.5864103-909-154772097679781/AnsiballZ_file.py'
Oct 10 09:53:46 compute-1 sudo[118187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:47 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:47 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b68002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:47 compute-1 python3.9[118189]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:53:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:53:47 compute-1 sudo[118187]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:47 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:47 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004020 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:47 compute-1 ceph-mon[79167]: pgmap v227: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:53:47 compute-1 sudo[118340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkivykyvtmteqomehfdxikiplgkbmusy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090027.4078746-938-121994815636879/AnsiballZ_stat.py'
Oct 10 09:53:47 compute-1 sudo[118340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:47 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:47 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba00049c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:47 compute-1 python3.9[118342]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:53:47 compute-1 sudo[118340]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:48 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:48 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:48 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:48.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:48 compute-1 sudo[118463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skzubmxbmoywoqbpkrgewudogqqasruk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090027.4078746-938-121994815636879/AnsiballZ_copy.py'
Oct 10 09:53:48 compute-1 sudo[118463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:48 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:48 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:48 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:48.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:48 compute-1 python3.9[118465]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090027.4078746-938-121994815636879/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=588de6fcfc4f8f2f1febb9ce163ed2886e4b0ed4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:53:48 compute-1 sudo[118463]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:49 compute-1 sudo[118615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbxzqpudewfkjyygxxixhurlvvedywub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090028.7979689-986-67919021198383/AnsiballZ_file.py'
Oct 10 09:53:49 compute-1 sudo[118615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:49 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba4009740 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:49 compute-1 python3.9[118617]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:53:49 compute-1 sudo[118615]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:49 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b68003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:49 compute-1 ceph-mon[79167]: pgmap v228: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 09:53:49 compute-1 sudo[118768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umahbrsyggnyvlsksyyggfthsrjidduy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090029.5289757-1013-185673998754397/AnsiballZ_stat.py'
Oct 10 09:53:49 compute-1 sudo[118768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:49 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004040 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:50 compute-1 python3.9[118770]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:53:50 compute-1 sudo[118768]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:50 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:50 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:50 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:50.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:50 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:50 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:50 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:50.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:50 compute-1 sudo[118891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xebogmpzmsvhnsojvkpjzzhtkjrrnznv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090029.5289757-1013-185673998754397/AnsiballZ_copy.py'
Oct 10 09:53:50 compute-1 sudo[118891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:50 compute-1 python3.9[118893]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090029.5289757-1013-185673998754397/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=588de6fcfc4f8f2f1febb9ce163ed2886e4b0ed4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:53:51 compute-1 sudo[118891]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:51 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba00049e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:51 compute-1 sudo[119044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pctpgbraaehusnwpduszyoxrfniuhnwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090031.2509336-1054-32990410435676/AnsiballZ_file.py'
Oct 10 09:53:51 compute-1 sudo[119044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:51 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba4009740 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:51 compute-1 sudo[119047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:53:51 compute-1 sudo[119047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:53:51 compute-1 sudo[119047]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:51 compute-1 ceph-mon[79167]: pgmap v229: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:53:51 compute-1 python3.9[119046]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:53:51 compute-1 sudo[119044]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:51 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b68003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:52 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:52 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:52 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:52.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:53:52 compute-1 sudo[119221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rinwvtbjmzxspvanvuecwqywpelkbznm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090032.0309086-1069-156904704374440/AnsiballZ_stat.py'
Oct 10 09:53:52 compute-1 sudo[119221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:52 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:52 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:52 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:52.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:52 compute-1 python3.9[119223]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:53:52 compute-1 sudo[119221]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:53 compute-1 sudo[119344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhepbvbiekriohroukupxdedfzgyexps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090032.0309086-1069-156904704374440/AnsiballZ_copy.py'
Oct 10 09:53:53 compute-1 sudo[119344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:53 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004060 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:53 compute-1 python3.9[119346]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090032.0309086-1069-156904704374440/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=588de6fcfc4f8f2f1febb9ce163ed2886e4b0ed4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:53:53 compute-1 sudo[119344]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:53 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004a00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:53 compute-1 ceph-mon[79167]: pgmap v230: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 09:53:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:53 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba4009740 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:54 compute-1 sudo[119497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eifclgemfxuazgwjtvgmjtrfowhdlpbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090033.4894433-1095-276832414453862/AnsiballZ_file.py'
Oct 10 09:53:54 compute-1 sudo[119497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:54 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:54 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:54 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:54.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:54 compute-1 python3.9[119499]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:53:54 compute-1 sudo[119497]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:54 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:54 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:54 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:54.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:54 compute-1 sshd-session[119500]: Invalid user  from 196.251.73.199 port 46780
Oct 10 09:53:54 compute-1 sudo[119651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkardqplvzpbaptnmlmsxioxeoztozgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090034.4942918-1103-172224008197639/AnsiballZ_stat.py'
Oct 10 09:53:54 compute-1 sudo[119651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:55 compute-1 python3.9[119653]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:53:55 compute-1 sudo[119651]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:55 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b68003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:55 compute-1 sudo[119775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujlqphyoflspzhfuojdfkdzycrmjtkfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090034.4942918-1103-172224008197639/AnsiballZ_copy.py'
Oct 10 09:53:55 compute-1 sudo[119775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:53:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:55 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004080 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:55 compute-1 python3.9[119777]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090034.4942918-1103-172224008197639/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=588de6fcfc4f8f2f1febb9ce163ed2886e4b0ed4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:53:55 compute-1 sudo[119775]: pam_unix(sudo:session): session closed for user root
Oct 10 09:53:55 compute-1 ceph-mon[79167]: pgmap v231: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:53:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:55 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004a20 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:56 compute-1 sshd-session[113602]: Connection closed by 192.168.122.30 port 37252
Oct 10 09:53:56 compute-1 sshd-session[113599]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:53:56 compute-1 systemd[1]: session-48.scope: Deactivated successfully.
Oct 10 09:53:56 compute-1 systemd[1]: session-48.scope: Consumed 28.468s CPU time.
Oct 10 09:53:56 compute-1 systemd-logind[789]: Session 48 logged out. Waiting for processes to exit.
Oct 10 09:53:56 compute-1 systemd-logind[789]: Removed session 48.
Oct 10 09:53:56 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:56 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:53:56 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:56.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:53:56 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:56 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:56 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:56.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:57 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba40098e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:53:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:57 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b68003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:57 compute-1 ceph-mon[79167]: pgmap v232: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:53:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:57 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b800040a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:58 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:58 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:53:58 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:53:58.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:53:58 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:53:58 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:53:58 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:53:58.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:53:59 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:59 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004a40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:59 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:59 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba40098e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:53:59 compute-1 ceph-mon[79167]: pgmap v233: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 09:53:59 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:53:59 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b68003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:00 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:00 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:00 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:00.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:00 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:00 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:00 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:00.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:01 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:01 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b800040c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:01 compute-1 sshd-session[119500]: Connection closed by invalid user  196.251.73.199 port 46780 [preauth]
Oct 10 09:54:01 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:01 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004a60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:01 compute-1 ceph-mon[79167]: pgmap v234: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:54:01 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:54:01 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:01 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba40098e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:02 compute-1 sshd-session[119805]: Accepted publickey for zuul from 192.168.122.30 port 34740 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:54:02 compute-1 systemd-logind[789]: New session 49 of user zuul.
Oct 10 09:54:02 compute-1 systemd[1]: Started Session 49 of User zuul.
Oct 10 09:54:02 compute-1 sshd-session[119805]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:54:02 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:02 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:54:02 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:02.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:54:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:54:02 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:02 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:02 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:02.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:02 compute-1 sudo[119958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqktfenqsaeotyjkuhnsdzgbklxvtcav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090042.1614926-27-104129569973601/AnsiballZ_file.py'
Oct 10 09:54:02 compute-1 sudo[119958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:03 compute-1 python3.9[119960]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:54:03 compute-1 sudo[119958]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:03 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b68003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:03 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b800040e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:03 compute-1 sudo[120111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqhfenpspnynyqsqvkvlpazefofrnkrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090043.2978232-63-90547915940566/AnsiballZ_stat.py'
Oct 10 09:54:03 compute-1 sudo[120111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:03 compute-1 ceph-mon[79167]: pgmap v235: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 09:54:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:03 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004a80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:04 compute-1 python3.9[120113]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:54:04 compute-1 sudo[120111]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:04 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:04 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:04 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:04.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:04 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:04 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:04 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:04.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:04 compute-1 sudo[120234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stcttsngqpufithiolbxxveozgicyznx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090043.2978232-63-90547915940566/AnsiballZ_copy.py'
Oct 10 09:54:04 compute-1 sudo[120234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:04 compute-1 python3.9[120236]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760090043.2978232-63-90547915940566/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=f4f20d3bcbb08befb7837fd0e595f186c33a7cc2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:54:04 compute-1 sudo[120234]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:05 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:05 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba40098e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:05 compute-1 sudo[120387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jklcobhuwewywgcaizhdfcfvvizsgnrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090045.0966501-63-277377550083821/AnsiballZ_stat.py'
Oct 10 09:54:05 compute-1 sudo[120387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:05 compute-1 python3.9[120389]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:54:05 compute-1 sudo[120387]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:05 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:05 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b68003630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:05 compute-1 ceph-mon[79167]: pgmap v236: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:54:05 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:05 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004100 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:06 compute-1 sudo[120510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjeobfmngadjtvceicfczaxvuskkvgbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090045.0966501-63-277377550083821/AnsiballZ_copy.py'
Oct 10 09:54:06 compute-1 sudo[120510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:06 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:06 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:54:06 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:06.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:54:06 compute-1 python3.9[120512]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760090045.0966501-63-277377550083821/.source.conf _original_basename=ceph.conf follow=False checksum=1a4b9adde8f120db415fb0ad56382b109e0fedc1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:54:06 compute-1 sudo[120510]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:06 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:06 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:06 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:06.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:06 compute-1 sshd-session[119808]: Connection closed by 192.168.122.30 port 34740
Oct 10 09:54:06 compute-1 sshd-session[119805]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:54:06 compute-1 systemd[1]: session-49.scope: Deactivated successfully.
Oct 10 09:54:06 compute-1 systemd[1]: session-49.scope: Consumed 3.404s CPU time.
Oct 10 09:54:06 compute-1 systemd-logind[789]: Session 49 logged out. Waiting for processes to exit.
Oct 10 09:54:06 compute-1 systemd-logind[789]: Removed session 49.
Oct 10 09:54:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:07 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004aa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:54:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:07 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba40098e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:07 compute-1 ceph-mon[79167]: pgmap v237: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:54:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:07 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:08 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:08 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.002000054s ======
Oct 10 09:54:08 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:08.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Oct 10 09:54:08 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:08 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:08 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:08.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:09 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b800041b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:09 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004aa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:09 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b880014b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:09 compute-1 ceph-mon[79167]: pgmap v238: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 09:54:10 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:10 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:10 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:10.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:10 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:10 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:54:10 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:10.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:54:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:11 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b7c002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:11 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b800041d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:11 compute-1 sudo[120542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:54:11 compute-1 sudo[120542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:54:11 compute-1 sudo[120542]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:11 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004aa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:11 compute-1 ceph-mon[79167]: pgmap v239: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:54:11 compute-1 sshd-session[120567]: Accepted publickey for zuul from 192.168.122.30 port 54950 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:54:12 compute-1 systemd-logind[789]: New session 50 of user zuul.
Oct 10 09:54:12 compute-1 systemd[1]: Started Session 50 of User zuul.
Oct 10 09:54:12 compute-1 sshd-session[120567]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:54:12 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:12 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:12 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:12.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:54:12 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:12 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:54:12 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:12.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:54:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:13 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004aa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:13 compute-1 python3.9[120720]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:54:13 compute-1 sudo[120723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:54:13 compute-1 sudo[120723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:54:13 compute-1 sudo[120723]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:13 compute-1 sudo[120752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Oct 10 09:54:13 compute-1 sudo[120752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:54:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:13 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004aa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:13 compute-1 sudo[120752]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:13 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba40098e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:13 compute-1 ceph-mon[79167]: pgmap v240: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 09:54:13 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:54:13 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:54:13 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:54:13 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:54:13 compute-1 sudo[120872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:54:13 compute-1 sudo[120872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:54:13 compute-1 sudo[120872]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:14 compute-1 sudo[120898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 09:54:14 compute-1 sudo[120898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:54:14 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:14 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:14 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:14.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:14 compute-1 sudo[121010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-depwvtoqiufqekvkoyedlqqxhrepikfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090053.7935746-63-25193876616767/AnsiballZ_file.py'
Oct 10 09:54:14 compute-1 sudo[121010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:14 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:14 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:14 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:14.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:14 compute-1 python3.9[121012]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:54:14 compute-1 sudo[121010]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:14 compute-1 sudo[120898]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:14 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:54:14 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:54:14 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:54:14 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:54:14 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 09:54:14 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:54:14 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:54:15 compute-1 sudo[121179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqrmfxogyetovgeafrqwvexmkjiyjnce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090054.7636623-63-74637878939188/AnsiballZ_file.py'
Oct 10 09:54:15 compute-1 sudo[121179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:15 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b6c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:15 compute-1 python3.9[121181]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:54:15 compute-1 sudo[121179]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:15 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004210 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:15 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004aa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/095415 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 09:54:15 compute-1 ceph-mon[79167]: pgmap v241: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:54:16 compute-1 python3.9[121332]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:54:16 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:16 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:16 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:16.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:16 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:16 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:16 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:16.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:54:16 compute-1 ceph-mon[79167]: pgmap v242: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:54:17 compute-1 sudo[121482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vysahxypftowwaxndieqswrgwtepaipx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090056.3711905-132-181578618737832/AnsiballZ_seboolean.py'
Oct 10 09:54:17 compute-1 sudo[121482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:17 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:17 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba40098e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:54:17 compute-1 python3.9[121484]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 10 09:54:17 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:17 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b6c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:17 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:17 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004230 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:18 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:18 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:18 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:18.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:18 compute-1 sudo[121482]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:18 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:18 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:18 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:18.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:19 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba0004aa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:19 compute-1 ceph-mon[79167]: pgmap v243: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:54:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:19 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba40098e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:19 compute-1 sudo[121640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojaianhjeulfdpncoktpabkaqprgrtad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090059.3293583-162-38160733804648/AnsiballZ_setup.py'
Oct 10 09:54:19 compute-1 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Oct 10 09:54:19 compute-1 sudo[121640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:19 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6ba40098e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:20 compute-1 sudo[121643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 09:54:20 compute-1 sudo[121643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:54:20 compute-1 sudo[121643]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:20 compute-1 python3.9[121642]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 09:54:20 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:20 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:20 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:20.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:20 compute-1 sudo[121640]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:20 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:20 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:20 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:20.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:20 compute-1 sudo[121749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnqnfkbqltviamrarpcwbrarqlyhajtm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090059.3293583-162-38160733804648/AnsiballZ_dnf.py'
Oct 10 09:54:20 compute-1 sudo[121749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:20 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:54:20 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:54:20 compute-1 python3.9[121751]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 09:54:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[100771]: 10/10/2025 09:54:21 : epoch 68e8d715 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6b80004250 fd 39 proxy ignored for local
Oct 10 09:54:21 compute-1 kernel: ganesha.nfsd[102000]: segfault at 50 ip 00007f6c56b7e32e sp 00007f6c2cff8210 error 4 in libntirpc.so.5.8[7f6c56b63000+2c000] likely on CPU 2 (core 0, socket 2)
Oct 10 09:54:21 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 10 09:54:21 compute-1 systemd[1]: Started Process Core Dump (PID 121753/UID 0).
Oct 10 09:54:21 compute-1 ceph-mon[79167]: pgmap v244: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:54:22 compute-1 sudo[121749]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:22 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:22 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:22 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:22.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:22 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:54:22 compute-1 systemd-coredump[121754]: Process 100776 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 44:
                                                    #0  0x00007f6c56b7e32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Oct 10 09:54:22 compute-1 systemd[1]: systemd-coredump@1-121753-0.service: Deactivated successfully.
Oct 10 09:54:22 compute-1 systemd[1]: systemd-coredump@1-121753-0.service: Consumed 1.226s CPU time.
Oct 10 09:54:22 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:22 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:22 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:22.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:22 compute-1 podman[121815]: 2025-10-10 09:54:22.570694677 +0000 UTC m=+0.048686688 container died 38469aeeacb4e5fd5cce3c07da0fa2ff7ec854adc34a8c8ac6ec34fa6024b1ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 10 09:54:22 compute-1 systemd[1]: var-lib-containers-storage-overlay-3bdcad42e292001657325bb58d2d66242ee3ebf8e20268f3dc10a8f21749e3ac-merged.mount: Deactivated successfully.
Oct 10 09:54:22 compute-1 podman[121815]: 2025-10-10 09:54:22.62767687 +0000 UTC m=+0.105668841 container remove 38469aeeacb4e5fd5cce3c07da0fa2ff7ec854adc34a8c8ac6ec34fa6024b1ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:54:22 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Main process exited, code=exited, status=139/n/a
Oct 10 09:54:22 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Failed with result 'exit-code'.
Oct 10 09:54:22 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 2.051s CPU time.
Oct 10 09:54:23 compute-1 sudo[121953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrbjrktvwhpnucdilhuiswezdbglnnef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090062.4638443-198-23870679104790/AnsiballZ_systemd.py'
Oct 10 09:54:23 compute-1 sudo[121953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:23 compute-1 python3.9[121955]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 10 09:54:23 compute-1 sudo[121953]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:23 compute-1 ceph-mon[79167]: pgmap v245: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:54:24 compute-1 sudo[122109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcfldsriayyhnkwxfrpgzkpwgnjsqshi ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760090063.7324364-222-251485447874321/AnsiballZ_edpm_nftables_snippet.py'
Oct 10 09:54:24 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:24 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:24 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:24.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:24 compute-1 sudo[122109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:24 compute-1 python3[122111]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Oct 10 09:54:24 compute-1 sudo[122109]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:24 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:24 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:54:24 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:24.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:54:25 compute-1 sudo[122261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgkqaztsdxgmqyxuuhcdzfnbncdioglh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090064.7739928-249-181289421500216/AnsiballZ_file.py'
Oct 10 09:54:25 compute-1 sudo[122261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:25 compute-1 python3.9[122263]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:54:25 compute-1 sudo[122261]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:25 compute-1 ceph-mon[79167]: pgmap v246: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Oct 10 09:54:26 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:26 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:54:26 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:26.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:54:26 compute-1 sudo[122414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkwqrggsggxfpoychvcjgmtxugmhntyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090065.6205077-273-176412611412650/AnsiballZ_stat.py'
Oct 10 09:54:26 compute-1 sudo[122414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:26 compute-1 python3.9[122416]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:54:26 compute-1 sudo[122414]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:26 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:26 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:54:26 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:26.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:54:26 compute-1 sudo[122492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixntdrttiekfulxozaikrdnnvwcfmivm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090065.6205077-273-176412611412650/AnsiballZ_file.py'
Oct 10 09:54:26 compute-1 sudo[122492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:27 compute-1 python3.9[122494]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:54:27 compute-1 sudo[122492]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:27 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/095427 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 09:54:27 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:54:27 compute-1 sudo[122645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixbgqohvicwqgcybhbpleqsbevuckllc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090067.2184887-309-42982694934512/AnsiballZ_stat.py'
Oct 10 09:54:27 compute-1 sudo[122645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:27 compute-1 python3.9[122647]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:54:27 compute-1 sudo[122645]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:27 compute-1 ceph-mon[79167]: pgmap v247: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Oct 10 09:54:28 compute-1 sudo[122723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phoeijbwmvpykzvqwqsflututnppzoyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090067.2184887-309-42982694934512/AnsiballZ_file.py'
Oct 10 09:54:28 compute-1 sudo[122723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:28 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:28 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:28 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:28.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:28 compute-1 python3.9[122725]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.60z2caf3 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:54:28 compute-1 sudo[122723]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:28 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:28 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:28 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:28.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:28 compute-1 sudo[122875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogwtkpocwwgkadvuzbordnvtnmvnxwkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090068.4823923-345-237571907821064/AnsiballZ_stat.py'
Oct 10 09:54:28 compute-1 sudo[122875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:29 compute-1 python3.9[122877]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:54:29 compute-1 sudo[122875]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:29 compute-1 sudo[122954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seubqilorxoyrhelnejahykdmubfofcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090068.4823923-345-237571907821064/AnsiballZ_file.py'
Oct 10 09:54:29 compute-1 sudo[122954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:29 compute-1 python3.9[122956]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:54:29 compute-1 sudo[122954]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:29 compute-1 ceph-mon[79167]: pgmap v248: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Oct 10 09:54:30 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:30 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:30 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:30.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:30 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:30 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:54:30 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:30.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:54:30 compute-1 sudo[123106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ieioowwpdnkaodoamryuwlicfqerrhct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090069.8383338-384-201813147645512/AnsiballZ_command.py'
Oct 10 09:54:30 compute-1 sudo[123106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:30 compute-1 python3.9[123108]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:54:30 compute-1 sudo[123106]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:31 compute-1 ceph-mon[79167]: pgmap v249: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Oct 10 09:54:31 compute-1 sudo[123260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swazhhbvbyrnmdocypxgkmsodboowbjo ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760090071.1388392-408-103853173492573/AnsiballZ_edpm_nftables_from_files.py'
Oct 10 09:54:31 compute-1 sudo[123260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:31 compute-1 python3[123262]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 10 09:54:31 compute-1 sudo[123260]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:31 compute-1 sudo[123263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:54:31 compute-1 sudo[123263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:54:31 compute-1 sudo[123263]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:32 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:54:32 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:32 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:32 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:32.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:32 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:54:32 compute-1 sudo[123437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-maztmpygemfgphcazqfkkpckvfgrpugo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090072.0973797-432-219297481913025/AnsiballZ_stat.py'
Oct 10 09:54:32 compute-1 sudo[123437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:32 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:32 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:32 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:32.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:32 compute-1 python3.9[123439]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:54:32 compute-1 sudo[123437]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:32 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Scheduled restart job, restart counter is at 2.
Oct 10 09:54:32 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 09:54:32 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 2.051s CPU time.
Oct 10 09:54:32 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 09:54:33 compute-1 ceph-mon[79167]: pgmap v250: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 09:54:33 compute-1 sudo[123607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygoyhqmqtyiqjkfkylsndprkyektkiso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090072.0973797-432-219297481913025/AnsiballZ_copy.py'
Oct 10 09:54:33 compute-1 sudo[123607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:33 compute-1 podman[123606]: 2025-10-10 09:54:33.225500185 +0000 UTC m=+0.071504642 container create 1d91e1ba81e585d0aec0c6e45fab163a0133d926a7a3d20799b9560daa96fdc7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:54:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131a6e8a675d298c48d9d3e69ce31f9b023c0f4c4ddce3fb844e9faf34c8deec/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 10 09:54:33 compute-1 podman[123606]: 2025-10-10 09:54:33.19619715 +0000 UTC m=+0.042201627 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:54:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131a6e8a675d298c48d9d3e69ce31f9b023c0f4c4ddce3fb844e9faf34c8deec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:54:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131a6e8a675d298c48d9d3e69ce31f9b023c0f4c4ddce3fb844e9faf34c8deec/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:54:33 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131a6e8a675d298c48d9d3e69ce31f9b023c0f4c4ddce3fb844e9faf34c8deec/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.mssvzx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:54:33 compute-1 podman[123606]: 2025-10-10 09:54:33.314298525 +0000 UTC m=+0.160303042 container init 1d91e1ba81e585d0aec0c6e45fab163a0133d926a7a3d20799b9560daa96fdc7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 09:54:33 compute-1 podman[123606]: 2025-10-10 09:54:33.327752229 +0000 UTC m=+0.173756696 container start 1d91e1ba81e585d0aec0c6e45fab163a0133d926a7a3d20799b9560daa96fdc7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 10 09:54:33 compute-1 bash[123606]: 1d91e1ba81e585d0aec0c6e45fab163a0133d926a7a3d20799b9560daa96fdc7
Oct 10 09:54:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:33 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 10 09:54:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:33 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 10 09:54:33 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 09:54:33 compute-1 python3.9[123610]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090072.0973797-432-219297481913025/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:54:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:33 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 10 09:54:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:33 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 10 09:54:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:33 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 10 09:54:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:33 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 10 09:54:33 compute-1 sudo[123607]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:33 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 10 09:54:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:33 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 09:54:34 compute-1 sudo[123817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgeunbobbnpcgyvnxqbmwakqpbbkywlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090073.6178687-477-124230480474413/AnsiballZ_stat.py'
Oct 10 09:54:34 compute-1 sudo[123817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:34 compute-1 python3.9[123819]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:54:34 compute-1 sudo[123817]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:34 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:34 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:54:34 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:34.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:54:34 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:34 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:34 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:34.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:34 compute-1 sudo[123942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsmilvnpteuglikwebbhhfjkligwbppn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090073.6178687-477-124230480474413/AnsiballZ_copy.py'
Oct 10 09:54:34 compute-1 sudo[123942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:35 compute-1 python3.9[123944]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090073.6178687-477-124230480474413/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:54:35 compute-1 sudo[123942]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:35 compute-1 ceph-mon[79167]: pgmap v251: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 09:54:35 compute-1 sudo[124095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqceqeiasfgtfkkfgjavoabkyfloerff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090075.270339-522-219546861891349/AnsiballZ_stat.py'
Oct 10 09:54:35 compute-1 sudo[124095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:35 compute-1 python3.9[124097]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:54:35 compute-1 sudo[124095]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:36 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:36 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:36 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:36.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:36 compute-1 sudo[124220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcgzvdxkrarmthagqthvsojvwileympg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090075.270339-522-219546861891349/AnsiballZ_copy.py'
Oct 10 09:54:36 compute-1 sudo[124220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:36 compute-1 python3.9[124222]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090075.270339-522-219546861891349/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:54:36 compute-1 sudo[124220]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:36 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:36 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:36 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:36.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:37 compute-1 sudo[124372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfdjlekgdkkehodmlkfezppxsbotnrim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090076.79518-567-118076608929673/AnsiballZ_stat.py'
Oct 10 09:54:37 compute-1 sudo[124372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:54:37 compute-1 python3.9[124374]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:54:37 compute-1 ceph-mon[79167]: pgmap v252: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 09:54:37 compute-1 sudo[124372]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:37 compute-1 sudo[124498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gddnltmcfgjisscrigpswjkvqmyvfqoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090076.79518-567-118076608929673/AnsiballZ_copy.py'
Oct 10 09:54:37 compute-1 sudo[124498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:38 compute-1 python3.9[124500]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090076.79518-567-118076608929673/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:54:38 compute-1 sudo[124498]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:38 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:38 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:54:38 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:38.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:54:38 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:38 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:38 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:38.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:39 compute-1 sudo[124650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akagcecprmkgrprjqtprmcmfpzlhfiug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090078.3927078-612-258648593591799/AnsiballZ_stat.py'
Oct 10 09:54:39 compute-1 sudo[124650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:39 compute-1 python3.9[124652]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:54:39 compute-1 sudo[124650]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:39 compute-1 ceph-mon[79167]: pgmap v253: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 09:54:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:39 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Oct 10 09:54:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:39 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Oct 10 09:54:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:39 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 09:54:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:39 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 09:54:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:39 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 09:54:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:39 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 09:54:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:39 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 09:54:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:39 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 09:54:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:39 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 09:54:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:39 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 09:54:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:39 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 09:54:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:39 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 09:54:39 compute-1 sudo[124776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idyloyzxasqtopdeewdehfanbfpopmlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090078.3927078-612-258648593591799/AnsiballZ_copy.py'
Oct 10 09:54:39 compute-1 sudo[124776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:39 compute-1 python3.9[124778]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090078.3927078-612-258648593591799/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:54:40 compute-1 sudo[124776]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:40 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:40 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:40 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:40.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:40 compute-1 sudo[124928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emkachmuqubkrmpdigeccnxlxrbdtmro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090080.2264438-657-54371730084308/AnsiballZ_file.py'
Oct 10 09:54:40 compute-1 sudo[124928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:40 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:40 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:40 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:40.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:40 compute-1 python3.9[124930]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:54:40 compute-1 sudo[124928]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:41 compute-1 sudo[125081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzjxkplvagqjeydjyfpnndofhnwshbuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090081.0102491-681-51595437093850/AnsiballZ_command.py'
Oct 10 09:54:41 compute-1 sudo[125081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:41 compute-1 ceph-mon[79167]: pgmap v254: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 597 B/s wr, 2 op/s
Oct 10 09:54:41 compute-1 python3.9[125083]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:54:41 compute-1 sudo[125081]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:41 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/095441 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 09:54:42 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:54:42 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:42 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:54:42 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:42.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:54:42 compute-1 sudo[125236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acgomhkaitwmekrmepkdsjduauhujpdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090081.8658166-705-253301701355054/AnsiballZ_blockinfile.py'
Oct 10 09:54:42 compute-1 sudo[125236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:42 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:42 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:54:42 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:42.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:54:42 compute-1 python3.9[125238]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:54:42 compute-1 sudo[125236]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:43 compute-1 sudo[125389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-moearzdytlxxsatqzourgkfjmpmdgvmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090082.927428-732-230743634583373/AnsiballZ_command.py'
Oct 10 09:54:43 compute-1 sudo[125389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:43 compute-1 ceph-mon[79167]: pgmap v255: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Oct 10 09:54:43 compute-1 python3.9[125391]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:54:43 compute-1 sudo[125389]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:44 compute-1 sudo[125542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbuefhzkapclfvexhsvqahbdegwqvvxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090083.807099-756-198689080222383/AnsiballZ_stat.py'
Oct 10 09:54:44 compute-1 sudo[125542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:44 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:44 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:44 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:44.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:44 compute-1 python3.9[125544]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:54:44 compute-1 sudo[125542]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:44 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:44 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:44 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:44.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:44 compute-1 sudo[125696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iccrvjrmffhfhhsrlknnposponlrlarb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090084.6249735-780-223635069688420/AnsiballZ_command.py'
Oct 10 09:54:44 compute-1 sudo[125696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:45 compute-1 python3.9[125698]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:54:45 compute-1 sudo[125696]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:45 compute-1 ceph-mon[79167]: pgmap v256: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Oct 10 09:54:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-000000000000000b:nfs.cephfs.0: -2
Oct 10 09:54:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 09:54:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 10 09:54:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 10 09:54:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 10 09:54:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 10 09:54:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 10 09:54:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 10 09:54:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 09:54:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 09:54:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 09:54:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 10 09:54:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 09:54:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 10 09:54:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 10 09:54:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 10 09:54:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 10 09:54:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 10 09:54:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 10 09:54:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 10 09:54:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 10 09:54:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 10 09:54:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 10 09:54:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 10 09:54:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 10 09:54:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 09:54:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 10 09:54:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 09:54:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e8000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:45 compute-1 sudo[125863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hugpizjryymttqcnuzaxzxdttjweiwni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090085.4071872-804-110183412283437/AnsiballZ_file.py'
Oct 10 09:54:45 compute-1 sudo[125863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:45 compute-1 python3.9[125869]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:54:45 compute-1 sudo[125863]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:46 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:46 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:46 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:46.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:46 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:54:46 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:46 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:54:46 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:46.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:54:47 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:47 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:54:47 compute-1 python3.9[126019]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:54:47 compute-1 ceph-mon[79167]: pgmap v257: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Oct 10 09:54:47 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:47 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e8000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:47 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:47 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:48 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:48 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:48 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:48.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:48 compute-1 sudo[126171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtqsmaowbvugnuqtpqceksmleigiennc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090087.9375606-924-84283204200009/AnsiballZ_command.py'
Oct 10 09:54:48 compute-1 sudo[126171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:48 compute-1 python3.9[126173]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-1.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:c0:16:5a:16" external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:54:48 compute-1 ovs-vsctl[126174]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-1.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:c0:16:5a:16 external_ids:ovn-encap-ip=172.19.0.101 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Oct 10 09:54:48 compute-1 sudo[126171]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:48 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:48 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:48 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:48.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:49 compute-1 sudo[126324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vacqbnueycctnqqvltynpjagcqjxgexi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090088.815551-951-75235484006593/AnsiballZ_command.py'
Oct 10 09:54:49 compute-1 sudo[126324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/095449 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 09:54:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:49 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c4000d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:49 compute-1 python3.9[126326]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:54:49 compute-1 sudo[126324]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:49 compute-1 ceph-mon[79167]: pgmap v258: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Oct 10 09:54:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:49 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:49 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e8000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:49 compute-1 sudo[126480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-embfsekkcmhwigqqwxifhdkvehvcikbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090089.6579285-975-56954577989892/AnsiballZ_command.py'
Oct 10 09:54:49 compute-1 sudo[126480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:50 compute-1 python3.9[126482]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:54:50 compute-1 ovs-vsctl[126483]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Oct 10 09:54:50 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:50 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:50 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:50.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:50 compute-1 sudo[126480]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:50 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:50 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:54:50 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:50.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:54:51 compute-1 python3.9[126633]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:54:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:51 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:51 compute-1 ceph-mon[79167]: pgmap v259: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Oct 10 09:54:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:51 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c4001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:51 compute-1 sudo[126786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfwwktwkzdfxdazsddahbkpcbenjplxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090091.3693936-1026-123840911671559/AnsiballZ_file.py'
Oct 10 09:54:51 compute-1 sudo[126786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:51 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:51 compute-1 python3.9[126788]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:54:51 compute-1 sudo[126786]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:52 compute-1 sudo[126789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:54:52 compute-1 sudo[126789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:54:52 compute-1 sudo[126789]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:54:52 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:52 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:52 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:52.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:52 compute-1 sudo[126963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agmzkomulvuvimjyytizhiqajewgpdto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090092.2117782-1050-61406272148826/AnsiballZ_stat.py'
Oct 10 09:54:52 compute-1 sudo[126963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:52 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:52 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:52 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:52.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:52 compute-1 python3.9[126965]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:54:52 compute-1 sudo[126963]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:53 compute-1 sudo[127041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecyqlgppvjficedvjowvbcwiktadasjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090092.2117782-1050-61406272148826/AnsiballZ_file.py'
Oct 10 09:54:53 compute-1 sudo[127041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:53 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e8000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:53 compute-1 python3.9[127043]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:54:53 compute-1 sudo[127041]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:53 compute-1 ceph-mon[79167]: pgmap v260: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Oct 10 09:54:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:53 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:53 compute-1 sudo[127194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kaygapugwexrubkdyhbcpsclellfpzom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090093.4634109-1050-254599839977613/AnsiballZ_stat.py'
Oct 10 09:54:53 compute-1 sudo[127194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:53 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c4001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:53 compute-1 python3.9[127196]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:54:54 compute-1 sudo[127194]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:54 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:54 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:54 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:54.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:54 compute-1 sudo[127272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slgsqlbozlhhbfgeusnlrnplatijwsld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090093.4634109-1050-254599839977613/AnsiballZ_file.py'
Oct 10 09:54:54 compute-1 sudo[127272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:54 compute-1 python3.9[127274]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:54:54 compute-1 sudo[127272]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:54 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:54 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:54 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:54.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:55 compute-1 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 09:54:55 compute-1 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 8302 writes, 34K keys, 8302 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
                                           Cumulative WAL: 8302 writes, 1698 syncs, 4.89 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8302 writes, 34K keys, 8302 commit groups, 1.0 writes per commit group, ingest: 21.40 MB, 0.04 MB/s
                                           Interval WAL: 8302 writes, 1698 syncs, 4.89 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac69b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac69b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac69b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 10 09:54:55 compute-1 sudo[127424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mefxhucxgwedbumexcqlntionogxbbam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090094.7054176-1119-236689311990186/AnsiballZ_file.py'
Oct 10 09:54:55 compute-1 sudo[127424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:55 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:55 compute-1 python3.9[127426]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:54:55 compute-1 sudo[127424]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:55 compute-1 ceph-mon[79167]: pgmap v261: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Oct 10 09:54:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:55 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:55 compute-1 sudo[127577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umaxdhbqjirqtrsaofcwwxiiwiuphwun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090095.4932954-1143-166097353639264/AnsiballZ_stat.py'
Oct 10 09:54:55 compute-1 sudo[127577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:55 compute-1 python3.9[127579]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:54:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:55 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:55 compute-1 sudo[127577]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:56 compute-1 sudo[127655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbfkhrpknnwvnmtvjdvygevsjqvxgliq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090095.4932954-1143-166097353639264/AnsiballZ_file.py'
Oct 10 09:54:56 compute-1 sudo[127655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:56 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:56 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 09:54:56 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:56.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 09:54:56 compute-1 python3.9[127657]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:54:56 compute-1 sudo[127655]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:56 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:56 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:56 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:56.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:56 compute-1 sudo[127807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmokebeuiklqenvpurmbtwltxisvashy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090096.661195-1179-92463967427150/AnsiballZ_stat.py'
Oct 10 09:54:56 compute-1 sudo[127807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:57 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c4001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:57 compute-1 python3.9[127809]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:54:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:54:57 compute-1 sudo[127807]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:57 compute-1 sudo[127886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffsrxdymlpyizyeqrzxcszqzgebtpqbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090096.661195-1179-92463967427150/AnsiballZ_file.py'
Oct 10 09:54:57 compute-1 sudo[127886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:57 compute-1 ceph-mon[79167]: pgmap v262: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Oct 10 09:54:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:57 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:57 compute-1 python3.9[127888]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:54:57 compute-1 sudo[127886]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:57 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:58 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:58 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:54:58 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:54:58.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:54:58 compute-1 sudo[128038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnyqzropgnurjcfxsoevualhchvsgjkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090097.9762814-1215-278778099206498/AnsiballZ_systemd.py'
Oct 10 09:54:58 compute-1 sudo[128038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:58 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:54:58 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:54:58 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:54:58.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:54:58 compute-1 python3.9[128040]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:54:58 compute-1 systemd[1]: Reloading.
Oct 10 09:54:58 compute-1 systemd-rc-local-generator[128070]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:54:58 compute-1 systemd-sysv-generator[128074]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:54:59 compute-1 sudo[128038]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:59 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:59 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:59 compute-1 sudo[128229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyvlzqdljqvmrsgoqbuukyqkucqdrxhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090099.252112-1239-184655024672422/AnsiballZ_stat.py'
Oct 10 09:54:59 compute-1 sudo[128229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:54:59 compute-1 ceph-mon[79167]: pgmap v263: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
Oct 10 09:54:59 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:59 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e8009330 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:54:59 compute-1 python3.9[128231]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:54:59 compute-1 sudo[128229]: pam_unix(sudo:session): session closed for user root
Oct 10 09:54:59 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:54:59 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:00 compute-1 sudo[128307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wspeqcksxpcekkjonnbwvzqwhikdivum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090099.252112-1239-184655024672422/AnsiballZ_file.py'
Oct 10 09:55:00 compute-1 sudo[128307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:00 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:00 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:00 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:00.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:00 compute-1 python3.9[128309]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:55:00 compute-1 sudo[128307]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:00 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:00 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 09:55:00 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:00.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 09:55:00 compute-1 sudo[128459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knpiljqjaxublaikrodgkemnnugtylbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090100.6287916-1275-212985474272305/AnsiballZ_stat.py'
Oct 10 09:55:00 compute-1 sudo[128459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:01 compute-1 python3.9[128461]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:55:01 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:01 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c4002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:01 compute-1 sudo[128459]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:01 compute-1 sudo[128538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbmfugypprvmipxghohogsbjjgvrhzqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090100.6287916-1275-212985474272305/AnsiballZ_file.py'
Oct 10 09:55:01 compute-1 sudo[128538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:01 compute-1 python3.9[128540]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:55:01 compute-1 sudo[128538]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:01 compute-1 ceph-mon[79167]: pgmap v264: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:01 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:55:01 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:01 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:01 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:01 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e8009c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:02 compute-1 sudo[128690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkxegawjrwygixfuspshuyjdqirlprtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090101.866752-1311-50423661077281/AnsiballZ_systemd.py'
Oct 10 09:55:02 compute-1 sudo[128690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:55:02 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:02 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:55:02 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:02.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:55:02 compute-1 python3.9[128692]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:55:02 compute-1 systemd[1]: Reloading.
Oct 10 09:55:02 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:02 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:02 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:02.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:02 compute-1 systemd-rc-local-generator[128720]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:55:02 compute-1 systemd-sysv-generator[128724]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:55:02 compute-1 systemd[1]: Starting Create netns directory...
Oct 10 09:55:02 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 10 09:55:02 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 10 09:55:02 compute-1 systemd[1]: Finished Create netns directory.
Oct 10 09:55:02 compute-1 sudo[128690]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:03 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c00032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:03 compute-1 sudo[128886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlxwqfkaskdehljiudejryvjyugrcxhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090103.2743137-1341-91374018015848/AnsiballZ_file.py'
Oct 10 09:55:03 compute-1 sudo[128886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:03 compute-1 ceph-mon[79167]: pgmap v265: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:03 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c4002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:03 compute-1 python3.9[128888]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:55:03 compute-1 sudo[128886]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:03 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:04 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:04 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:55:04 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:04.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:55:04 compute-1 sudo[129038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-riijuzfgvlefmsaadryqhoubkxtqylvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090104.065703-1365-41433418606351/AnsiballZ_stat.py'
Oct 10 09:55:04 compute-1 sudo[129038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:04 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:04 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:55:04 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:04.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:55:04 compute-1 python3.9[129040]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:55:04 compute-1 sudo[129038]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:05 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:05 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e8009c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:05 compute-1 sudo[129161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfjsdsvdguqdircoahsazqfnlgrqsxhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090104.065703-1365-41433418606351/AnsiballZ_copy.py'
Oct 10 09:55:05 compute-1 sudo[129161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:05 compute-1 python3.9[129164]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760090104.065703-1365-41433418606351/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:55:05 compute-1 sudo[129161]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:05 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:05 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c00032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:05 compute-1 ceph-mon[79167]: pgmap v266: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:05 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:05 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c40039c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:06 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:06 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:06 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:06.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:06 compute-1 sudo[129314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzuuobrbzdsceinrhnissxqkigpyerys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090105.9175522-1416-116243415161999/AnsiballZ_file.py'
Oct 10 09:55:06 compute-1 sudo[129314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:06 compute-1 python3.9[129316]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:55:06 compute-1 sudo[129314]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:06 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:06 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:06 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:06.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:07 compute-1 sudo[129466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcdkgfwhaxqdjjbdavtwamqmwwqqfmmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090106.7524204-1440-76328038016746/AnsiballZ_stat.py'
Oct 10 09:55:07 compute-1 sudo[129466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:07 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:55:07 compute-1 python3.9[129468]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:55:07 compute-1 sudo[129466]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:07 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:07 compute-1 ceph-mon[79167]: pgmap v267: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:07 compute-1 sudo[129590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ongixyyrhutgaczthfgjxbzievuhigks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090106.7524204-1440-76328038016746/AnsiballZ_copy.py'
Oct 10 09:55:07 compute-1 sudo[129590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:07 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:08 compute-1 python3.9[129592]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760090106.7524204-1440-76328038016746/.source.json _original_basename=.f_hkh9hg follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:55:08 compute-1 sudo[129590]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:08 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:08 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:08 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:08.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:08 compute-1 sudo[129742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-baxpgfasbdlkqrfyuslriwfjvtwcaiay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090108.2555752-1485-147790069260654/AnsiballZ_file.py'
Oct 10 09:55:08 compute-1 sudo[129742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:08 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:08 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 09:55:08 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:08.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 09:55:08 compute-1 python3.9[129744]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:55:08 compute-1 sudo[129742]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:09 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:09 compute-1 sudo[129895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udaiboxgcgwfrteewsiqkjgcsssyxoiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090109.1466684-1509-142015521692288/AnsiballZ_stat.py'
Oct 10 09:55:09 compute-1 sudo[129895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:09 compute-1 sudo[129895]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:09 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e8009c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:09 compute-1 ceph-mon[79167]: pgmap v268: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:55:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:09 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:10 compute-1 sudo[130018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmumlbqfzhnosagwtujraduresunmzkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090109.1466684-1509-142015521692288/AnsiballZ_copy.py'
Oct 10 09:55:10 compute-1 sudo[130018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:10 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:10 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 09:55:10 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:10.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 09:55:10 compute-1 sudo[130018]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:10 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:10 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:55:10 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:10.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:55:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:11 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:11 compute-1 sudo[130170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihkjjgmbvfefqxrihjvlbnbpnwwwjsdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090110.7608335-1560-101554597075757/AnsiballZ_container_config_data.py'
Oct 10 09:55:11 compute-1 sudo[130170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:11 compute-1 python3.9[130172]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Oct 10 09:55:11 compute-1 sudo[130170]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:11 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:11 compute-1 ceph-mon[79167]: pgmap v269: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:11 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e8009c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:12 compute-1 sudo[130262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:55:12 compute-1 sudo[130262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:55:12 compute-1 sudo[130262]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:12 compute-1 sudo[130348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjvxqxkkhlwdnhyavcrhqrmxsezehqbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090111.6797035-1587-161793388177046/AnsiballZ_container_config_hash.py'
Oct 10 09:55:12 compute-1 sudo[130348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:55:12 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:12 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:12 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:12.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:12 compute-1 python3.9[130350]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 10 09:55:12 compute-1 sudo[130348]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:12 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:12 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:12 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:12.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:13 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:13 compute-1 sudo[130501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cohyibtsodhvwfawjgkxfszjdcggweih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090112.7589278-1614-209682023265214/AnsiballZ_podman_container_info.py'
Oct 10 09:55:13 compute-1 sudo[130501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:13 compute-1 python3.9[130503]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 10 09:55:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:13 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:13 compute-1 sudo[130501]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:13 compute-1 ceph-mon[79167]: pgmap v270: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:13 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:14 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:14 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:14 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:14.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:14 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:14 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 09:55:14 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:14.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 09:55:15 compute-1 sudo[130680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oshvhjnslotuyhjyvbgirmfgahaavxeo ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760090114.5194786-1653-15973167398452/AnsiballZ_edpm_container_manage.py'
Oct 10 09:55:15 compute-1 sudo[130680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:15 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e8009c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:15 compute-1 python3[130682]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 10 09:55:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:15 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:15 compute-1 ceph-mon[79167]: pgmap v271: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:15 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:16 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:16 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:16 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:16.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:16 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:16 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:55:16 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:16.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:55:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:55:17 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:17 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:55:17 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:17 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:17 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:17 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3cc000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:17 compute-1 ceph-mon[79167]: pgmap v272: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:18 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:18 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 09:55:18 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:18.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 09:55:18 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:18 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:18 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:18.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:19 compute-1 ceph-mon[79167]: pgmap v273: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:55:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:19 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:19 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c40039c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:19 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:20 compute-1 sudo[130784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:55:20 compute-1 sudo[130784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:55:20 compute-1 sudo[130784]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:20 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:20 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:20 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:20.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:20 compute-1 sudo[130809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 09:55:20 compute-1 sudo[130809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:55:20 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:20 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:55:20 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:20.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:55:21 compute-1 podman[130698]: 2025-10-10 09:55:21.066253259 +0000 UTC m=+5.516159970 image pull 70c92fb64e1eda6ef063d34e60e9a541e44edbaa51e757e8304331202c76a3a7 quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857
Oct 10 09:55:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:21 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3b0000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:21 compute-1 podman[130889]: 2025-10-10 09:55:21.209491355 +0000 UTC m=+0.049747041 container create bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 09:55:21 compute-1 podman[130889]: 2025-10-10 09:55:21.18460686 +0000 UTC m=+0.024862536 image pull 70c92fb64e1eda6ef063d34e60e9a541e44edbaa51e757e8304331202c76a3a7 quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857
Oct 10 09:55:21 compute-1 python3[130682]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume 
/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857
Oct 10 09:55:21 compute-1 sudo[130809]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:21 compute-1 ceph-mon[79167]: pgmap v274: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:21 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:55:21 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:55:21 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:55:21 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:55:21 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:55:21 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:55:21 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 09:55:21 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:55:21 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:55:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:21 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:21 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c40039c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:22 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 09:55:22 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 2459 writes, 14K keys, 2459 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.06 MB/s
                                           Cumulative WAL: 2459 writes, 2459 syncs, 1.00 writes per sync, written: 0.04 GB, 0.06 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2459 writes, 14K keys, 2459 commit groups, 1.0 writes per commit group, ingest: 37.91 MB, 0.06 MB/s
                                           Interval WAL: 2459 writes, 2459 syncs, 1.00 writes per sync, written: 0.04 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    141.5      0.15              0.08         6    0.024       0      0       0.0       0.0
                                             L6      1/0   12.11 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   2.9    206.8    181.3      0.33              0.17         5    0.066     21K   2259       0.0       0.0
                                            Sum      1/0   12.11 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.9    143.8    169.2      0.48              0.25        11    0.043     21K   2259       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.9    144.6    170.1      0.48              0.25        10    0.048     21K   2259       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   0.0    206.8    181.3      0.33              0.17         5    0.066     21K   2259       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    143.9      0.14              0.08         5    0.029       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.020, interval 0.020
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.08 GB write, 0.13 MB/s write, 0.07 GB read, 0.11 MB/s read, 0.5 seconds
                                           Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.07 GB read, 0.11 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5625d3e63350#2 capacity: 304.00 MB usage: 2.59 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 7.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(170,2.40 MB,0.787956%) FilterBlock(11,69.05 KB,0.0221805%) IndexBlock(11,132.45 KB,0.0425489%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 10 09:55:22 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:55:22 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:22 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:22 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:22.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:22 compute-1 sudo[130680]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:22 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:22 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 09:55:22 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:22.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 09:55:22 compute-1 sudo[131092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztmviwuokzwwimzptuqpdtkuctyxomrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090122.572334-1677-199732454215659/AnsiballZ_stat.py'
Oct 10 09:55:22 compute-1 sudo[131092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:23 compute-1 python3.9[131094]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:55:23 compute-1 sudo[131092]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:23 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:23 compute-1 ceph-mon[79167]: pgmap v275: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:23 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3b00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:23 compute-1 sudo[131247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stiqtbnkhhtnumzpdihoexxfbjeipsmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090123.4583204-1704-191313108264840/AnsiballZ_file.py'
Oct 10 09:55:23 compute-1 sudo[131247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:23 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:24 compute-1 python3.9[131249]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:55:24 compute-1 sudo[131247]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:24 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:24 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:24 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:24.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:24 compute-1 sudo[131323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcmskftrnzsddraqfgqfwuameckmwohp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090123.4583204-1704-191313108264840/AnsiballZ_stat.py'
Oct 10 09:55:24 compute-1 sudo[131323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:24 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:24 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:55:24 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:24.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:55:24 compute-1 python3.9[131325]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:55:24 compute-1 sudo[131323]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:25 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:25 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c40039c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:25 compute-1 sudo[131474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgetvhhdyxhmjgppdcfehmjolhmcnayt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090124.7619548-1704-75025744652160/AnsiballZ_copy.py'
Oct 10 09:55:25 compute-1 sudo[131474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:25 compute-1 python3.9[131476]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760090124.7619548-1704-75025744652160/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:55:25 compute-1 sudo[131474]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:25 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:25 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:25 compute-1 ceph-mon[79167]: pgmap v276: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:25 compute-1 sudo[131551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvsyiothyalmveszsdymkizthtdfdbui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090124.7619548-1704-75025744652160/AnsiballZ_systemd.py'
Oct 10 09:55:25 compute-1 sudo[131551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:25 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:25 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3b00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:26 compute-1 python3.9[131553]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 10 09:55:26 compute-1 systemd[1]: Reloading.
Oct 10 09:55:26 compute-1 systemd-rc-local-generator[131581]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:55:26 compute-1 systemd-sysv-generator[131586]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:55:26 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:26 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 09:55:26 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:26.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 09:55:26 compute-1 sudo[131551]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:26 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:26 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:26 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:26.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:26 compute-1 sudo[131613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 09:55:26 compute-1 sudo[131613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:55:26 compute-1 sudo[131613]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:26 compute-1 sudo[131688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhsqsvcljwtumcttmilhyvdufsjjurgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090124.7619548-1704-75025744652160/AnsiballZ_systemd.py'
Oct 10 09:55:26 compute-1 sudo[131688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:27 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:27 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:27 compute-1 python3.9[131690]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:55:27 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:55:27 compute-1 systemd[1]: Reloading.
Oct 10 09:55:27 compute-1 systemd-sysv-generator[131725]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:55:27 compute-1 systemd-rc-local-generator[131721]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:55:27 compute-1 systemd[1]: Starting ovn_controller container...
Oct 10 09:55:27 compute-1 ceph-mon[79167]: pgmap v277: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:27 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:55:27 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:55:27 compute-1 systemd[1]: Started libcrun container.
Oct 10 09:55:27 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c985298b94bb2ac08e4e80495a89deaa1110af6f6b90fce21a195bfa4aca6f9/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct 10 09:55:27 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:27 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:27 compute-1 systemd[1]: Started /usr/bin/podman healthcheck run bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7.
Oct 10 09:55:27 compute-1 podman[131733]: 2025-10-10 09:55:27.804601942 +0000 UTC m=+0.191218710 container init bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller)
Oct 10 09:55:27 compute-1 ovn_controller[131749]: + sudo -E kolla_set_configs
Oct 10 09:55:27 compute-1 podman[131733]: 2025-10-10 09:55:27.845993525 +0000 UTC m=+0.232610243 container start bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller)
Oct 10 09:55:27 compute-1 edpm-start-podman-container[131733]: ovn_controller
Oct 10 09:55:27 compute-1 systemd[1]: Created slice User Slice of UID 0.
Oct 10 09:55:27 compute-1 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 10 09:55:27 compute-1 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 10 09:55:27 compute-1 systemd[1]: Starting User Manager for UID 0...
Oct 10 09:55:27 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:27 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c40039c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:27 compute-1 systemd[131781]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Oct 10 09:55:27 compute-1 edpm-start-podman-container[131732]: Creating additional drop-in dependency for "ovn_controller" (bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7)
Oct 10 09:55:28 compute-1 podman[131756]: 2025-10-10 09:55:28.0042878 +0000 UTC m=+0.141456919 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 10 09:55:28 compute-1 systemd[1]: bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7-57f2736abf5f19a4.service: Main process exited, code=exited, status=1/FAILURE
Oct 10 09:55:28 compute-1 systemd[1]: bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7-57f2736abf5f19a4.service: Failed with result 'exit-code'.
Oct 10 09:55:28 compute-1 systemd[1]: Reloading.
Oct 10 09:55:28 compute-1 systemd-sysv-generator[131836]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:55:28 compute-1 systemd-rc-local-generator[131833]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:55:28 compute-1 systemd[131781]: Queued start job for default target Main User Target.
Oct 10 09:55:28 compute-1 systemd[131781]: Created slice User Application Slice.
Oct 10 09:55:28 compute-1 systemd[131781]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 10 09:55:28 compute-1 systemd[131781]: Started Daily Cleanup of User's Temporary Directories.
Oct 10 09:55:28 compute-1 systemd[131781]: Reached target Paths.
Oct 10 09:55:28 compute-1 systemd[131781]: Reached target Timers.
Oct 10 09:55:28 compute-1 systemd[131781]: Starting D-Bus User Message Bus Socket...
Oct 10 09:55:28 compute-1 systemd[131781]: Starting Create User's Volatile Files and Directories...
Oct 10 09:55:28 compute-1 systemd[131781]: Listening on D-Bus User Message Bus Socket.
Oct 10 09:55:28 compute-1 systemd[131781]: Reached target Sockets.
Oct 10 09:55:28 compute-1 systemd[131781]: Finished Create User's Volatile Files and Directories.
Oct 10 09:55:28 compute-1 systemd[131781]: Reached target Basic System.
Oct 10 09:55:28 compute-1 systemd[131781]: Reached target Main User Target.
Oct 10 09:55:28 compute-1 systemd[131781]: Startup finished in 162ms.
Oct 10 09:55:28 compute-1 systemd[1]: Started User Manager for UID 0.
Oct 10 09:55:28 compute-1 systemd[1]: Started ovn_controller container.
Oct 10 09:55:28 compute-1 systemd[1]: Started Session c1 of User root.
Oct 10 09:55:28 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:28 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:55:28 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:28.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:55:28 compute-1 sudo[131688]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:28 compute-1 ovn_controller[131749]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 10 09:55:28 compute-1 ovn_controller[131749]: INFO:__main__:Validating config file
Oct 10 09:55:28 compute-1 ovn_controller[131749]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 10 09:55:28 compute-1 ovn_controller[131749]: INFO:__main__:Writing out command to execute
Oct 10 09:55:28 compute-1 systemd[1]: session-c1.scope: Deactivated successfully.
Oct 10 09:55:28 compute-1 ovn_controller[131749]: ++ cat /run_command
Oct 10 09:55:28 compute-1 ovn_controller[131749]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct 10 09:55:28 compute-1 ovn_controller[131749]: + ARGS=
Oct 10 09:55:28 compute-1 ovn_controller[131749]: + sudo kolla_copy_cacerts
Oct 10 09:55:28 compute-1 systemd[1]: Started Session c2 of User root.
Oct 10 09:55:28 compute-1 systemd[1]: session-c2.scope: Deactivated successfully.
Oct 10 09:55:28 compute-1 ovn_controller[131749]: + [[ ! -n '' ]]
Oct 10 09:55:28 compute-1 ovn_controller[131749]: + . kolla_extend_start
Oct 10 09:55:28 compute-1 ovn_controller[131749]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct 10 09:55:28 compute-1 ovn_controller[131749]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Oct 10 09:55:28 compute-1 ovn_controller[131749]: + umask 0022
Oct 10 09:55:28 compute-1 ovn_controller[131749]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Oct 10 09:55:28 compute-1 ovn_controller[131749]: 2025-10-10T09:55:28Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct 10 09:55:28 compute-1 ovn_controller[131749]: 2025-10-10T09:55:28Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct 10 09:55:28 compute-1 ovn_controller[131749]: 2025-10-10T09:55:28Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Oct 10 09:55:28 compute-1 ovn_controller[131749]: 2025-10-10T09:55:28Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Oct 10 09:55:28 compute-1 ovn_controller[131749]: 2025-10-10T09:55:28Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 10 09:55:28 compute-1 ovn_controller[131749]: 2025-10-10T09:55:28Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Oct 10 09:55:28 compute-1 NetworkManager[44982]: <info>  [1760090128.5394] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Oct 10 09:55:28 compute-1 NetworkManager[44982]: <info>  [1760090128.5408] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 09:55:28 compute-1 NetworkManager[44982]: <info>  [1760090128.5433] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Oct 10 09:55:28 compute-1 NetworkManager[44982]: <info>  [1760090128.5447] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Oct 10 09:55:28 compute-1 NetworkManager[44982]: <info>  [1760090128.5456] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 10 09:55:28 compute-1 kernel: br-int: entered promiscuous mode
Oct 10 09:55:28 compute-1 ovn_controller[131749]: 2025-10-10T09:55:28Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 10 09:55:28 compute-1 ovn_controller[131749]: 2025-10-10T09:55:28Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 10 09:55:28 compute-1 ovn_controller[131749]: 2025-10-10T09:55:28Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 10 09:55:28 compute-1 ovn_controller[131749]: 2025-10-10T09:55:28Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Oct 10 09:55:28 compute-1 ovn_controller[131749]: 2025-10-10T09:55:28Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Oct 10 09:55:28 compute-1 ovn_controller[131749]: 2025-10-10T09:55:28Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Oct 10 09:55:28 compute-1 ovn_controller[131749]: 2025-10-10T09:55:28Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct 10 09:55:28 compute-1 ovn_controller[131749]: 2025-10-10T09:55:28Z|00014|main|INFO|OVS feature set changed, force recompute.
Oct 10 09:55:28 compute-1 ovn_controller[131749]: 2025-10-10T09:55:28Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 10 09:55:28 compute-1 ovn_controller[131749]: 2025-10-10T09:55:28Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 10 09:55:28 compute-1 ovn_controller[131749]: 2025-10-10T09:55:28Z|00017|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct 10 09:55:28 compute-1 ovn_controller[131749]: 2025-10-10T09:55:28Z|00018|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Oct 10 09:55:28 compute-1 ovn_controller[131749]: 2025-10-10T09:55:28Z|00019|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Oct 10 09:55:28 compute-1 ovn_controller[131749]: 2025-10-10T09:55:28Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 10 09:55:28 compute-1 ovn_controller[131749]: 2025-10-10T09:55:28Z|00021|main|INFO|OVS feature set changed, force recompute.
Oct 10 09:55:28 compute-1 ovn_controller[131749]: 2025-10-10T09:55:28Z|00022|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 10 09:55:28 compute-1 ovn_controller[131749]: 2025-10-10T09:55:28Z|00023|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Oct 10 09:55:28 compute-1 ovn_controller[131749]: 2025-10-10T09:55:28Z|00024|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Oct 10 09:55:28 compute-1 ovn_controller[131749]: 2025-10-10T09:55:28Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 10 09:55:28 compute-1 ovn_controller[131749]: 2025-10-10T09:55:28Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 10 09:55:28 compute-1 ovn_controller[131749]: 2025-10-10T09:55:28Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 10 09:55:28 compute-1 ovn_controller[131749]: 2025-10-10T09:55:28Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 10 09:55:28 compute-1 ovn_controller[131749]: 2025-10-10T09:55:28Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 10 09:55:28 compute-1 NetworkManager[44982]: <info>  [1760090128.5634] manager: (ovn-49146e-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Oct 10 09:55:28 compute-1 NetworkManager[44982]: <info>  [1760090128.5639] manager: (ovn-a1a60c-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Oct 10 09:55:28 compute-1 NetworkManager[44982]: <info>  [1760090128.5645] manager: (ovn-38ab03-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Oct 10 09:55:28 compute-1 ovn_controller[131749]: 2025-10-10T09:55:28Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 10 09:55:28 compute-1 kernel: genev_sys_6081: entered promiscuous mode
Oct 10 09:55:28 compute-1 NetworkManager[44982]: <info>  [1760090128.5918] device (genev_sys_6081): carrier: link connected
Oct 10 09:55:28 compute-1 NetworkManager[44982]: <info>  [1760090128.5921] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/22)
Oct 10 09:55:28 compute-1 systemd-udevd[131922]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 09:55:28 compute-1 systemd-udevd[131926]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 09:55:28 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:28 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:28 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:28.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:28 compute-1 sudo[132013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rapllcrmrcpjnjbxkspxuhylqoqxayvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090128.5469255-1788-147115309856678/AnsiballZ_command.py'
Oct 10 09:55:28 compute-1 sudo[132013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:29 compute-1 python3.9[132015]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:55:29 compute-1 ovs-vsctl[132016]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Oct 10 09:55:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:29 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3b00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:29 compute-1 sudo[132013]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:29 compute-1 ceph-mon[79167]: pgmap v278: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:55:29 compute-1 sudo[132167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icuqbkfetiaktvucptqjhbfylqhkthsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090129.4179027-1812-39200101064660/AnsiballZ_command.py'
Oct 10 09:55:29 compute-1 sudo[132167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:29 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:29 compute-1 python3.9[132169]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:55:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:29 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:29 compute-1 ovs-vsctl[132171]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Oct 10 09:55:30 compute-1 sudo[132167]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:30 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:30 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 09:55:30 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:30.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 09:55:30 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:30 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:55:30 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:30.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:55:30 compute-1 sudo[132322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nogewwokhbvildhvhmjkuiomnobvkvdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090130.5509121-1854-245584697044153/AnsiballZ_command.py'
Oct 10 09:55:30 compute-1 sudo[132322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:31 compute-1 python3.9[132324]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:55:31 compute-1 ovs-vsctl[132325]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Oct 10 09:55:31 compute-1 sudo[132322]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:31 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c40039c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:31 compute-1 sshd-session[120570]: Connection closed by 192.168.122.30 port 54950
Oct 10 09:55:31 compute-1 sshd-session[120567]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:55:31 compute-1 systemd[1]: session-50.scope: Deactivated successfully.
Oct 10 09:55:31 compute-1 systemd[1]: session-50.scope: Consumed 1min 7.487s CPU time.
Oct 10 09:55:31 compute-1 systemd-logind[789]: Session 50 logged out. Waiting for processes to exit.
Oct 10 09:55:31 compute-1 systemd-logind[789]: Removed session 50.
Oct 10 09:55:31 compute-1 ceph-mon[79167]: pgmap v279: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:31 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:55:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:31 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3b0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:31 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:32 compute-1 sudo[132351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:55:32 compute-1 sudo[132351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:55:32 compute-1 sudo[132351]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:32 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:55:32 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:32 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:32 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:32.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:32 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:32 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:32 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:32.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:33 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:33 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c40039c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:33 compute-1 ceph-mon[79167]: pgmap v280: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:33 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3b0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:34 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:34 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:34 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:34.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:34 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:34 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:34 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:34.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:35 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:35 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:35 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:35 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:35 compute-1 ceph-mon[79167]: pgmap v281: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:35 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:35 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c40039c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:36 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:36 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:36 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:36.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:36 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:36 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:55:36 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:36.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:55:36 compute-1 sshd-session[132378]: Accepted publickey for zuul from 192.168.122.30 port 60002 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:55:36 compute-1 systemd-logind[789]: New session 52 of user zuul.
Oct 10 09:55:36 compute-1 systemd[1]: Started Session 52 of User zuul.
Oct 10 09:55:36 compute-1 sshd-session[132378]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:55:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:37 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3b0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:55:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:37 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:37 compute-1 ceph-mon[79167]: pgmap v282: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:37 compute-1 python3.9[132532]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:55:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:37 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:38 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:38 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:55:38 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:38.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:55:38 compute-1 systemd[1]: Stopping User Manager for UID 0...
Oct 10 09:55:38 compute-1 systemd[131781]: Activating special unit Exit the Session...
Oct 10 09:55:38 compute-1 systemd[131781]: Stopped target Main User Target.
Oct 10 09:55:38 compute-1 systemd[131781]: Stopped target Basic System.
Oct 10 09:55:38 compute-1 systemd[131781]: Stopped target Paths.
Oct 10 09:55:38 compute-1 systemd[131781]: Stopped target Sockets.
Oct 10 09:55:38 compute-1 systemd[131781]: Stopped target Timers.
Oct 10 09:55:38 compute-1 systemd[131781]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 10 09:55:38 compute-1 systemd[131781]: Closed D-Bus User Message Bus Socket.
Oct 10 09:55:38 compute-1 systemd[131781]: Stopped Create User's Volatile Files and Directories.
Oct 10 09:55:38 compute-1 systemd[131781]: Removed slice User Application Slice.
Oct 10 09:55:38 compute-1 systemd[131781]: Reached target Shutdown.
Oct 10 09:55:38 compute-1 systemd[131781]: Finished Exit the Session.
Oct 10 09:55:38 compute-1 systemd[131781]: Reached target Exit the Session.
Oct 10 09:55:38 compute-1 systemd[1]: user@0.service: Deactivated successfully.
Oct 10 09:55:38 compute-1 systemd[1]: Stopped User Manager for UID 0.
Oct 10 09:55:38 compute-1 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 10 09:55:38 compute-1 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 10 09:55:38 compute-1 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 10 09:55:38 compute-1 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 10 09:55:38 compute-1 systemd[1]: Removed slice User Slice of UID 0.
Oct 10 09:55:38 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:38 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:38 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:38.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:38 compute-1 sudo[132688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exfvrvwofgljyqovpkwsjlgzhxnoenky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090138.457438-63-225486855706518/AnsiballZ_file.py'
Oct 10 09:55:38 compute-1 sudo[132688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:39 compute-1 python3.9[132690]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:55:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:39 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:39 compute-1 sudo[132688]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:39 compute-1 sudo[132841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfjuluzyujhfkdwhxuycanyorofbymju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090139.3945453-63-79659198187289/AnsiballZ_file.py'
Oct 10 09:55:39 compute-1 sudo[132841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:39 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3b0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:39 compute-1 ceph-mon[79167]: pgmap v283: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:55:39 compute-1 python3.9[132843]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:55:39 compute-1 sudo[132841]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:39 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:40 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:40 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:40 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:40.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:40 compute-1 sudo[132993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udnmlppezgpxskdnqaimqcvgiytzdlyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090140.129246-63-162358600582225/AnsiballZ_file.py'
Oct 10 09:55:40 compute-1 sudo[132993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:40 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:40 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:55:40 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:40.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:55:40 compute-1 python3.9[132995]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:55:40 compute-1 sudo[132993]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:41 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:41 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:41 compute-1 sudo[133145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yngmrfxhlnwohsihtnhgqjqmwibzczvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090140.8977878-63-53801566221412/AnsiballZ_file.py'
Oct 10 09:55:41 compute-1 sudo[133145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:41 compute-1 python3.9[133147]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:55:41 compute-1 sudo[133145]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:41 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:41 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:41 compute-1 ceph-mon[79167]: pgmap v284: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:41 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:41 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3b0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:42 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:55:42 compute-1 sudo[133298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ceybcniynjyfbcrmmoklasgclzclvdac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090141.698287-63-205131090200523/AnsiballZ_file.py'
Oct 10 09:55:42 compute-1 sudo[133298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:42 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:42 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:42 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:42.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:42 compute-1 python3.9[133300]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:55:42 compute-1 sudo[133298]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:42 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:42 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:55:42 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:42.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:55:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:43 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:43 compute-1 python3.9[133450]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:55:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:43 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:43 compute-1 ceph-mon[79167]: pgmap v285: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:43 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:44 compute-1 sudo[133601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jimyvdgsfslkoqvvaglycgjpggzsxduu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090143.741164-195-77207017487923/AnsiballZ_seboolean.py'
Oct 10 09:55:44 compute-1 sudo[133601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:44 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:44 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:44 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:44.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:44 compute-1 python3.9[133603]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 10 09:55:44 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:44 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:44 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:44.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:45 compute-1 sudo[133601]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3b0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:46 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:45 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:46 compute-1 ceph-mon[79167]: pgmap v286: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:46 compute-1 python3.9[133754]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:55:46 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:46 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:55:46 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:46.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:55:46 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:46 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:46 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:46.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:47 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:55:47 compute-1 ceph-mon[79167]: pgmap v287: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:47 compute-1 python3.9[133876]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760090145.417774-219-211815174147707/.source follow=False _original_basename=haproxy.j2 checksum=4bca74f6ee0b6450624d22997e2f90c414d58b44 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:55:47 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:47 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:55:47 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:47 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3b0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:47 compute-1 python3.9[134027]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:55:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:48 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:48 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:48 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:55:48 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:48.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:55:48 compute-1 python3.9[134148]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760090147.3173375-264-139570385810756/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:55:48 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:48 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:48 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:48.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:49 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c40039c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:49 compute-1 sudo[134299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nomqzjhrcigtwpgouhqrpluivldxxviy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090149.1032043-315-244872500342504/AnsiballZ_setup.py'
Oct 10 09:55:49 compute-1 sudo[134299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:49 compute-1 ceph-mon[79167]: pgmap v288: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:55:49 compute-1 python3.9[134301]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 09:55:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:49 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3dc003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:49 compute-1 sudo[134299]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:50 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:50 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3b0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:50 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:50 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:50 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:50.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:50 compute-1 sudo[134383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmohrwhbqlszvehfecrdrlzogrndcdmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090149.1032043-315-244872500342504/AnsiballZ_dnf.py'
Oct 10 09:55:50 compute-1 sudo[134383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:50 compute-1 python3.9[134385]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 09:55:50 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:50 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:55:50 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:50.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:55:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:51 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:51 compute-1 ceph-mon[79167]: pgmap v289: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:51 compute-1 sudo[134383]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:51 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c40039c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:52 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:52 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e8001320 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:55:52 compute-1 sudo[134438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:55:52 compute-1 sudo[134438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:55:52 compute-1 sudo[134438]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:52 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:52 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:55:52 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:52.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:55:52 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:52 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:55:52 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:52.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:55:52 compute-1 sudo[134564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovwldksutvrfmudmgstnopizvamjjaaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090152.2241793-351-164594783877710/AnsiballZ_systemd.py'
Oct 10 09:55:52 compute-1 sudo[134564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:55:53 compute-1 python3.9[134566]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 10 09:55:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:53 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3b0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:53 compute-1 sudo[134564]: pam_unix(sudo:session): session closed for user root
Oct 10 09:55:53 compute-1 ceph-mon[79167]: pgmap v290: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:53 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3cc0011f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:54 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:54 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3c40039c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:55:54 compute-1 python3.9[134720]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:55:54 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:54 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 09:55:54 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:54.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 09:55:54 compute-1 python3.9[134841]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760090153.5146852-375-40591609089866/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:55:54 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:54 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 09:55:54 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:54.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 09:55:55 compute-1 kernel: ganesha.nfsd[134388]: segfault at 50 ip 00007fb497e7332e sp 00007fb461ffa210 error 4 in libntirpc.so.5.8[7fb497e58000+2c000] likely on CPU 2 (core 0, socket 2)
Oct 10 09:55:55 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 10 09:55:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[123625]: 10/10/2025 09:55:55 : epoch 68e8d7d9 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb3e8001320 fd 38 proxy ignored for local
Oct 10 09:55:55 compute-1 systemd[1]: Started Process Core Dump (PID 134986/UID 0).
Oct 10 09:55:55 compute-1 python3.9[134992]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:55:55 compute-1 ceph-mon[79167]: pgmap v291: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:56 compute-1 python3.9[135115]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760090154.8828712-375-250302573173699/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:55:56 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:56 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:56 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:56.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:56 compute-1 systemd-coredump[134993]: Process 123630 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 59:
                                                    #0  0x00007fb497e7332e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Oct 10 09:55:56 compute-1 systemd[1]: systemd-coredump@2-134986-0.service: Deactivated successfully.
Oct 10 09:55:56 compute-1 systemd[1]: systemd-coredump@2-134986-0.service: Consumed 1.258s CPU time.
Oct 10 09:55:56 compute-1 podman[135144]: 2025-10-10 09:55:56.652722644 +0000 UTC m=+0.047814201 container died 1d91e1ba81e585d0aec0c6e45fab163a0133d926a7a3d20799b9560daa96fdc7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 09:55:56 compute-1 systemd[1]: var-lib-containers-storage-overlay-131a6e8a675d298c48d9d3e69ce31f9b023c0f4c4ddce3fb844e9faf34c8deec-merged.mount: Deactivated successfully.
Oct 10 09:55:56 compute-1 podman[135144]: 2025-10-10 09:55:56.698052926 +0000 UTC m=+0.093144453 container remove 1d91e1ba81e585d0aec0c6e45fab163a0133d926a7a3d20799b9560daa96fdc7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 09:55:56 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Main process exited, code=exited, status=139/n/a
Oct 10 09:55:56 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:56 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:56 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:56.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:56 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Failed with result 'exit-code'.
Oct 10 09:55:56 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 1.918s CPU time.
Oct 10 09:55:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:55:57 compute-1 python3.9[135314]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:55:57 compute-1 ceph-mon[79167]: pgmap v292: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:55:58 compute-1 python3.9[135436]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760090156.8858407-507-240203052217470/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:55:58 compute-1 ovn_controller[131749]: 2025-10-10T09:55:58Z|00025|memory|INFO|16256 kB peak resident set size after 29.7 seconds
Oct 10 09:55:58 compute-1 ovn_controller[131749]: 2025-10-10T09:55:58Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Oct 10 09:55:58 compute-1 podman[135437]: 2025-10-10 09:55:58.233561551 +0000 UTC m=+0.148160209 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 10 09:55:58 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:58 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:55:58 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:55:58.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:55:58 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:55:58 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:55:58 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:55:58.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:55:58 compute-1 python3.9[135614]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:55:59 compute-1 python3.9[135735]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760090158.2549388-507-264385398988774/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:55:59 compute-1 ceph-mon[79167]: pgmap v293: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 09:56:00 compute-1 python3.9[135886]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:56:00 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:00 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:00 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:00.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:00 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:00 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:56:00 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:00.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:56:00 compute-1 sudo[136038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txkenurewcgsmfwtaetdvorkmpipxxzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090160.5978928-621-188985726053537/AnsiballZ_file.py'
Oct 10 09:56:00 compute-1 sudo[136038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:01 compute-1 python3.9[136040]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:56:01 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/095601 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 09:56:01 compute-1 sudo[136038]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:01 compute-1 ceph-mon[79167]: pgmap v294: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:56:01 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:56:01 compute-1 sudo[136191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsvdbbsxqsamgfkycwwgqnveeqvtczvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090161.4267285-645-27728346670518/AnsiballZ_stat.py'
Oct 10 09:56:01 compute-1 sudo[136191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:01 compute-1 python3.9[136193]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:56:02 compute-1 sudo[136191]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:56:02 compute-1 sudo[136269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bowmltyqiojhhcbxhsmkixroiyvnykjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090161.4267285-645-27728346670518/AnsiballZ_file.py'
Oct 10 09:56:02 compute-1 sudo[136269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:02 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:02 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:02 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:02.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:02 compute-1 python3.9[136271]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:56:02 compute-1 sudo[136269]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:02 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:02 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:56:02 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:02.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:56:03 compute-1 sudo[136421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsgmrbpctwwgkiuekbajmgddobrfucbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090162.7007666-645-106099547632763/AnsiballZ_stat.py'
Oct 10 09:56:03 compute-1 sudo[136421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:03 compute-1 python3.9[136423]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:56:03 compute-1 sudo[136421]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:03 compute-1 sudo[136500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfuyqgkjzrneuccesaoxurpnrflwvewp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090162.7007666-645-106099547632763/AnsiballZ_file.py'
Oct 10 09:56:03 compute-1 sudo[136500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:03 compute-1 ceph-mon[79167]: pgmap v295: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:56:03 compute-1 python3.9[136502]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:56:03 compute-1 sudo[136500]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:04 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:04 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:04 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:04.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:04 compute-1 sudo[136652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afbcfbbypxktduxeqawjevascdfjwedd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090164.0587084-714-75725067324310/AnsiballZ_file.py'
Oct 10 09:56:04 compute-1 sudo[136652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:04 compute-1 python3.9[136654]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:56:04 compute-1 sudo[136652]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:04 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:04 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:04 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:04.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:05 compute-1 sudo[136804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpimenlxxuuviynwxwjcmtpeutmufpio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090164.8860343-738-84840047590739/AnsiballZ_stat.py'
Oct 10 09:56:05 compute-1 sudo[136804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:05 compute-1 python3.9[136806]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:56:05 compute-1 sudo[136804]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:05 compute-1 ceph-mon[79167]: pgmap v296: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:56:05 compute-1 sudo[136883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uiazvexenflxxxfgbikuqytmzothqaex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090164.8860343-738-84840047590739/AnsiballZ_file.py'
Oct 10 09:56:05 compute-1 sudo[136883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:05 compute-1 python3.9[136885]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:56:05 compute-1 sudo[136883]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:06 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:06 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:56:06 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:06.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:56:06 compute-1 sudo[137035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgmdiorbjikgieptrlbqhypsslandquq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090166.197866-774-233358851735500/AnsiballZ_stat.py'
Oct 10 09:56:06 compute-1 sudo[137035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:06 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:06 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:06 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:06.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:06 compute-1 python3.9[137037]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:56:06 compute-1 sudo[137035]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:06 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Scheduled restart job, restart counter is at 3.
Oct 10 09:56:06 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 09:56:06 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 1.918s CPU time.
Oct 10 09:56:07 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 09:56:07 compute-1 sudo[137125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqlmoimoguvpgtspynyxsmtxtqjflzom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090166.197866-774-233358851735500/AnsiballZ_file.py'
Oct 10 09:56:07 compute-1 sudo[137125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:56:07 compute-1 python3.9[137128]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:56:07 compute-1 podman[137161]: 2025-10-10 09:56:07.307424557 +0000 UTC m=+0.059680973 container create f06251c00be534001d35bdb537d404f9774100b5ad0c3caa27f9fd4f4b4dedb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Oct 10 09:56:07 compute-1 sudo[137125]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:07 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3e619c70174e25189507790d74cd6c583ce379b86dd3dfded0cd49fbdbca08e/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 10 09:56:07 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3e619c70174e25189507790d74cd6c583ce379b86dd3dfded0cd49fbdbca08e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 09:56:07 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3e619c70174e25189507790d74cd6c583ce379b86dd3dfded0cd49fbdbca08e/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:56:07 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3e619c70174e25189507790d74cd6c583ce379b86dd3dfded0cd49fbdbca08e/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.mssvzx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 09:56:07 compute-1 podman[137161]: 2025-10-10 09:56:07.274892632 +0000 UTC m=+0.027149058 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 09:56:07 compute-1 podman[137161]: 2025-10-10 09:56:07.38367761 +0000 UTC m=+0.135934046 container init f06251c00be534001d35bdb537d404f9774100b5ad0c3caa27f9fd4f4b4dedb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:56:07 compute-1 podman[137161]: 2025-10-10 09:56:07.394445583 +0000 UTC m=+0.146701969 container start f06251c00be534001d35bdb537d404f9774100b5ad0c3caa27f9fd4f4b4dedb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:56:07 compute-1 bash[137161]: f06251c00be534001d35bdb537d404f9774100b5ad0c3caa27f9fd4f4b4dedb1
Oct 10 09:56:07 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 09:56:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:07 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 10 09:56:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:07 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 10 09:56:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:07 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 10 09:56:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:07 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 10 09:56:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:07 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 10 09:56:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:07 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 10 09:56:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:07 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 10 09:56:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:07 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 09:56:07 compute-1 ceph-mon[79167]: pgmap v297: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:56:07 compute-1 sudo[137369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izmbosfxljaheposfaidsjneghyfdche ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090167.5463145-810-120823818478881/AnsiballZ_systemd.py'
Oct 10 09:56:07 compute-1 sudo[137369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:08 compute-1 python3.9[137371]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:56:08 compute-1 systemd[1]: Reloading.
Oct 10 09:56:08 compute-1 systemd-rc-local-generator[137399]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:56:08 compute-1 systemd-sysv-generator[137402]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:56:08 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:08 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:56:08 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:08.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:56:08 compute-1 sudo[137369]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:08 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:08 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:08 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:08.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:09 compute-1 sudo[137558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhbcyaiejenbzwbcgqsknxigijbecddf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090168.8803577-834-158746510802083/AnsiballZ_stat.py'
Oct 10 09:56:09 compute-1 sudo[137558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:09 compute-1 python3.9[137560]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:56:09 compute-1 sudo[137558]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:09 compute-1 ceph-mon[79167]: pgmap v298: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 09:56:09 compute-1 sudo[137637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsdzfdjvrbpumllucvnfylxfjvuvptsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090168.8803577-834-158746510802083/AnsiballZ_file.py'
Oct 10 09:56:09 compute-1 sudo[137637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:10 compute-1 python3.9[137639]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:56:10 compute-1 sudo[137637]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:10 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:10 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:56:10 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:10.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:56:10 compute-1 sudo[137789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aztbhthfclulbuohaawdmwkiidzswyfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090170.193652-870-29401976540151/AnsiballZ_stat.py'
Oct 10 09:56:10 compute-1 sudo[137789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:10 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:10 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:56:10 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:10.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:56:10 compute-1 python3.9[137791]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:56:10 compute-1 sudo[137789]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:11 compute-1 sudo[137867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbfxvyfgbacqpzryxusmnmxsvennsgvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090170.193652-870-29401976540151/AnsiballZ_file.py'
Oct 10 09:56:11 compute-1 sudo[137867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:11 compute-1 python3.9[137869]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:56:11 compute-1 sudo[137867]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:11 compute-1 ceph-mon[79167]: pgmap v299: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:56:11 compute-1 sudo[138020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxohdohvdxqoqddvxnmnopanjsgmykxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090171.5735683-906-243124870285109/AnsiballZ_systemd.py'
Oct 10 09:56:11 compute-1 sudo[138020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:56:12 compute-1 python3.9[138022]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:56:12 compute-1 systemd[1]: Reloading.
Oct 10 09:56:12 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:12 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:12 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:12.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:12 compute-1 systemd-sysv-generator[138079]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:56:12 compute-1 systemd-rc-local-generator[138076]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:56:12 compute-1 sudo[138024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:56:12 compute-1 sudo[138024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:56:12 compute-1 sudo[138024]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:12 compute-1 systemd[1]: Starting Create netns directory...
Oct 10 09:56:12 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 10 09:56:12 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 10 09:56:12 compute-1 systemd[1]: Finished Create netns directory.
Oct 10 09:56:12 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:12 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:12 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:12.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:12 compute-1 sudo[138020]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:13 compute-1 sudo[138240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wehwkaitmulxtxwymnxloxxnduaeytjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090173.0560455-936-265034147186203/AnsiballZ_file.py'
Oct 10 09:56:13 compute-1 sudo[138240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:13 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 09:56:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:13 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 09:56:13 compute-1 python3.9[138242]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:56:13 compute-1 sudo[138240]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:13 compute-1 ceph-mon[79167]: pgmap v300: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 09:56:14 compute-1 sudo[138392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytrouygryerxwxsfmtevwpkvenlnbgmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090173.874504-960-74001828533149/AnsiballZ_stat.py'
Oct 10 09:56:14 compute-1 sudo[138392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:14 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:14 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:14 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:14.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:14 compute-1 python3.9[138394]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:56:14 compute-1 sudo[138392]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:14 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:14 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:56:14 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:14.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:56:14 compute-1 sudo[138515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsojfseaahytkcizqwywnvskgockkaks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090173.874504-960-74001828533149/AnsiballZ_copy.py'
Oct 10 09:56:14 compute-1 sudo[138515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:15 compute-1 python3.9[138517]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760090173.874504-960-74001828533149/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:56:15 compute-1 sudo[138515]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:15 compute-1 ceph-mon[79167]: pgmap v301: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 09:56:15 compute-1 sudo[138668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nobombkivltplozwrqrkhmrqwnixqrid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090175.5788085-1011-219773103996823/AnsiballZ_file.py'
Oct 10 09:56:15 compute-1 sudo[138668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:16 compute-1 python3.9[138670]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:56:16 compute-1 sudo[138668]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:16 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:16 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:16 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:16.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:16 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:16 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:56:16 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:16.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:56:16 compute-1 sudo[138820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdknoyufyowaquyduntgapxjvyahikzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090176.4142663-1035-236382636447488/AnsiballZ_stat.py'
Oct 10 09:56:16 compute-1 sudo[138820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:56:17 compute-1 python3.9[138822]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:56:17 compute-1 sudo[138820]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:56:17 compute-1 sudo[138944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aovjzrzeyvxocwirqdplgtcvdjebzbaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090176.4142663-1035-236382636447488/AnsiballZ_copy.py'
Oct 10 09:56:17 compute-1 sudo[138944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:17 compute-1 python3.9[138946]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760090176.4142663-1035-236382636447488/.source.json _original_basename=.ob3n8vt3 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:56:17 compute-1 sudo[138944]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:17 compute-1 ceph-mon[79167]: pgmap v302: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 09:56:18 compute-1 sudo[139096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hljiaegjdxvrlulnpmcffeltehdhehxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090177.9303942-1080-255750093603784/AnsiballZ_file.py'
Oct 10 09:56:18 compute-1 sudo[139096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:18 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:18 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:56:18 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:18.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:56:18 compute-1 python3.9[139098]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:56:18 compute-1 sudo[139096]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:18 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:18 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:18 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:18.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:19 compute-1 sudo[139248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvinijppinqppfmpxxecprmlbsrffkos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090178.8444295-1104-141425131383203/AnsiballZ_stat.py'
Oct 10 09:56:19 compute-1 sudo[139248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:19 compute-1 sudo[139248]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 09:56:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 10 09:56:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 10 09:56:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 10 09:56:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 10 09:56:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 10 09:56:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 10 09:56:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 09:56:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 09:56:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 09:56:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 10 09:56:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 09:56:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 10 09:56:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 10 09:56:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 10 09:56:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 10 09:56:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 10 09:56:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 10 09:56:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 10 09:56:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 10 09:56:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 10 09:56:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 10 09:56:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 10 09:56:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 10 09:56:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 09:56:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 10 09:56:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 09:56:19 compute-1 sudo[139385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slbelyruoigzqyrrxvmzllldiqebnvdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090178.8444295-1104-141425131383203/AnsiballZ_copy.py'
Oct 10 09:56:19 compute-1 sudo[139385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4940000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:19 compute-1 ceph-mon[79167]: pgmap v303: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 09:56:19 compute-1 sudo[139385]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:20 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:20 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c001240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:20 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:20 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:20 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:20.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:20 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:20 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:20 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:20.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:20 compute-1 sudo[139539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhvjpkgvufzlqsgplrogxdwcpcnkzyed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090180.3746302-1155-98390450869097/AnsiballZ_container_config_data.py'
Oct 10 09:56:20 compute-1 sudo[139539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:21 compute-1 python3.9[139541]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Oct 10 09:56:21 compute-1 sudo[139539]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:21 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:21 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:21 compute-1 ceph-mon[79167]: pgmap v304: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 09:56:21 compute-1 sudo[139692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhriuqkwflypndxlrmxblievdnjigqnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090181.3833938-1182-108899432445161/AnsiballZ_container_config_hash.py'
Oct 10 09:56:21 compute-1 sudo[139692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:22 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:22 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:22 compute-1 python3.9[139694]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 10 09:56:22 compute-1 sudo[139692]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:22 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:56:22 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:22 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:22 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:22.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:22 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:22 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:22 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:22.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:22 compute-1 sudo[139844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpummaqhigqfibnjgtcmidlabmnzqgsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090182.4852061-1209-227702763400378/AnsiballZ_podman_container_info.py'
Oct 10 09:56:23 compute-1 sudo[139844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:23 compute-1 python3.9[139846]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 10 09:56:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/095623 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 09:56:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:23 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c001f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:23 compute-1 sudo[139844]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:23 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:23 compute-1 ceph-mon[79167]: pgmap v305: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 09:56:24 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:24 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c001f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:24 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:24 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:56:24 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:24.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:56:24 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:24 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:56:24 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:24.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:56:24 compute-1 sudo[140023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqvhbyawunocbiikdgxqefnxrycwopca ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760090184.267547-1248-74452305126543/AnsiballZ_edpm_container_manage.py'
Oct 10 09:56:24 compute-1 sudo[140023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:25 compute-1 python3[140025]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 10 09:56:25 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:25 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49140016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:25 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:25 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:25 compute-1 ceph-mon[79167]: pgmap v306: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 09:56:26 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:26 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:26 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:26 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:26 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:26.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:26 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:26 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:56:26 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:26.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:56:27 compute-1 sudo[140078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:56:27 compute-1 sudo[140078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:56:27 compute-1 sudo[140078]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:27 compute-1 sudo[140113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 09:56:27 compute-1 sudo[140113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:56:27 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:27 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c001f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:27 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:56:27 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:27 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49140016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:28 compute-1 ceph-mon[79167]: pgmap v307: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 09:56:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:28 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:28 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:28 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:28 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:28.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:28 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:28 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:28 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:28.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:29 compute-1 ceph-mon[79167]: pgmap v308: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Oct 10 09:56:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:29 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:29 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:30 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:30 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49140016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:30 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:30 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:56:30 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:30.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:56:30 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:30 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:30 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:30.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:31 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:31 compute-1 ceph-mon[79167]: pgmap v309: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 09:56:31 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:56:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:31 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:32 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:32 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:32 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:56:32 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:32 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:32 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:32.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:32 compute-1 sudo[140209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:56:32 compute-1 sudo[140209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:56:32 compute-1 sudo[140209]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:32 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:32 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:32 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:32.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:32 compute-1 sudo[140113]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:33 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:33 compute-1 podman[140157]: 2025-10-10 09:56:33.689829862 +0000 UTC m=+4.780948769 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct 10 09:56:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:33 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:34 compute-1 ceph-mon[79167]: pgmap v310: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 10 09:56:34 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:56:34 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:56:34 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:56:34 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:56:34 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 09:56:34 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:56:34 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:56:34 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:34 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:34 compute-1 podman[140038]: 2025-10-10 09:56:34.14774498 +0000 UTC m=+8.951063868 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 10 09:56:34 compute-1 podman[140306]: 2025-10-10 09:56:34.379441144 +0000 UTC m=+0.067695416 container create c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 10 09:56:34 compute-1 podman[140306]: 2025-10-10 09:56:34.345999536 +0000 UTC m=+0.034253898 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 10 09:56:34 compute-1 python3[140025]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 10 09:56:34 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:34 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:34 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:34.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:34 compute-1 sudo[140023]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:34 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:34 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:56:34 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:34.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:56:35 compute-1 ceph-mon[79167]: pgmap v311: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:56:35 compute-1 sudo[140494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blfgujwvmlpsyskugunipzmfidgmnsyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090194.7678027-1272-212521310557184/AnsiballZ_stat.py'
Oct 10 09:56:35 compute-1 sudo[140494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:35 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:35 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:35 compute-1 python3.9[140496]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:56:35 compute-1 sudo[140494]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:35 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:35 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:36 compute-1 sudo[140649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhyfcvmfhqrzfthnftrwqmpnlmtwazxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090195.6575584-1299-274676689436785/AnsiballZ_file.py'
Oct 10 09:56:36 compute-1 sudo[140649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:36 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:36 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:36 compute-1 python3.9[140651]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:56:36 compute-1 sudo[140649]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:36 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:36 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:36 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:36.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:36 compute-1 sudo[140725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzuienplnmfyvnvlyqsfrnjwjlvcomei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090195.6575584-1299-274676689436785/AnsiballZ_stat.py'
Oct 10 09:56:36 compute-1 sudo[140725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:36 compute-1 python3.9[140727]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 09:56:36 compute-1 sudo[140725]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:36 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:36 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:36 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:36.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:37 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:56:37 compute-1 sudo[140877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgazxnlomdounpqutxtqbewfyytdozmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090196.8461943-1299-53411867460487/AnsiballZ_copy.py'
Oct 10 09:56:37 compute-1 sudo[140877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:37 compute-1 ceph-mon[79167]: pgmap v312: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:56:37 compute-1 python3.9[140879]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760090196.8461943-1299-53411867460487/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:56:37 compute-1 sudo[140877]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:37 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:37 compute-1 sudo[140953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvycoewfxyxlfkcdslxslelmuojaayog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090196.8461943-1299-53411867460487/AnsiballZ_systemd.py'
Oct 10 09:56:37 compute-1 sudo[140953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:38 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:38 compute-1 python3.9[140955]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 10 09:56:38 compute-1 systemd[1]: Reloading.
Oct 10 09:56:38 compute-1 systemd-rc-local-generator[140981]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:56:38 compute-1 systemd-sysv-generator[140987]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:56:38 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:38 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:38 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:38.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:38 compute-1 sudo[140953]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:38 compute-1 sudo[140991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 09:56:38 compute-1 sudo[140991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:56:38 compute-1 sudo[140991]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:38 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:38 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:56:38 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:38.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:56:38 compute-1 sudo[141089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijvpeeckecljzsbwomnzxgelykhlqehq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090196.8461943-1299-53411867460487/AnsiballZ_systemd.py'
Oct 10 09:56:38 compute-1 sudo[141089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:39 compute-1 python3.9[141091]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:56:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:39 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:39 compute-1 systemd[1]: Reloading.
Oct 10 09:56:39 compute-1 systemd-sysv-generator[141124]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:56:39 compute-1 systemd-rc-local-generator[141120]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:56:39 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:56:39 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:56:39 compute-1 ceph-mon[79167]: pgmap v313: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 09:56:39 compute-1 systemd[1]: Starting ovn_metadata_agent container...
Oct 10 09:56:39 compute-1 systemd[1]: Started libcrun container.
Oct 10 09:56:39 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b81fc82675d96858f7a747d2ddbf15ae6cb8daca49b083b0fa4c06685d283a50/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Oct 10 09:56:39 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b81fc82675d96858f7a747d2ddbf15ae6cb8daca49b083b0fa4c06685d283a50/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 10 09:56:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:39 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:40 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:40 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:40 compute-1 systemd[1]: Started /usr/bin/podman healthcheck run c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51.
Oct 10 09:56:40 compute-1 podman[141135]: 2025-10-10 09:56:40.205647321 +0000 UTC m=+0.577931075 container init c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 10 09:56:40 compute-1 ovn_metadata_agent[141151]: + sudo -E kolla_set_configs
Oct 10 09:56:40 compute-1 podman[141135]: 2025-10-10 09:56:40.24786238 +0000 UTC m=+0.620146154 container start c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:56:40 compute-1 edpm-start-podman-container[141135]: ovn_metadata_agent
Oct 10 09:56:40 compute-1 ovn_metadata_agent[141151]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 10 09:56:40 compute-1 ovn_metadata_agent[141151]: INFO:__main__:Validating config file
Oct 10 09:56:40 compute-1 ovn_metadata_agent[141151]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 10 09:56:40 compute-1 ovn_metadata_agent[141151]: INFO:__main__:Copying service configuration files
Oct 10 09:56:40 compute-1 ovn_metadata_agent[141151]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Oct 10 09:56:40 compute-1 ovn_metadata_agent[141151]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Oct 10 09:56:40 compute-1 ovn_metadata_agent[141151]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Oct 10 09:56:40 compute-1 ovn_metadata_agent[141151]: INFO:__main__:Writing out command to execute
Oct 10 09:56:40 compute-1 ovn_metadata_agent[141151]: INFO:__main__:Setting permission for /var/lib/neutron
Oct 10 09:56:40 compute-1 ovn_metadata_agent[141151]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Oct 10 09:56:40 compute-1 ovn_metadata_agent[141151]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Oct 10 09:56:40 compute-1 ovn_metadata_agent[141151]: INFO:__main__:Setting permission for /var/lib/neutron/external
Oct 10 09:56:40 compute-1 ovn_metadata_agent[141151]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Oct 10 09:56:40 compute-1 ovn_metadata_agent[141151]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Oct 10 09:56:40 compute-1 ovn_metadata_agent[141151]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Oct 10 09:56:40 compute-1 ovn_metadata_agent[141151]: ++ cat /run_command
Oct 10 09:56:40 compute-1 ovn_metadata_agent[141151]: + CMD=neutron-ovn-metadata-agent
Oct 10 09:56:40 compute-1 ovn_metadata_agent[141151]: + ARGS=
Oct 10 09:56:40 compute-1 ovn_metadata_agent[141151]: + sudo kolla_copy_cacerts
Oct 10 09:56:40 compute-1 podman[141158]: 2025-10-10 09:56:40.345637055 +0000 UTC m=+0.079718371 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 09:56:40 compute-1 edpm-start-podman-container[141134]: Creating additional drop-in dependency for "ovn_metadata_agent" (c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51)
Oct 10 09:56:40 compute-1 ovn_metadata_agent[141151]: + [[ ! -n '' ]]
Oct 10 09:56:40 compute-1 ovn_metadata_agent[141151]: + . kolla_extend_start
Oct 10 09:56:40 compute-1 ovn_metadata_agent[141151]: Running command: 'neutron-ovn-metadata-agent'
Oct 10 09:56:40 compute-1 ovn_metadata_agent[141151]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Oct 10 09:56:40 compute-1 ovn_metadata_agent[141151]: + umask 0022
Oct 10 09:56:40 compute-1 ovn_metadata_agent[141151]: + exec neutron-ovn-metadata-agent
Oct 10 09:56:40 compute-1 systemd[1]: Reloading.
Oct 10 09:56:40 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:40 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:40 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:40.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:40 compute-1 systemd-rc-local-generator[141226]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:56:40 compute-1 systemd-sysv-generator[141232]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:56:40 compute-1 systemd[1]: Started ovn_metadata_agent container.
Oct 10 09:56:40 compute-1 sudo[141089]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:40 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:40 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:40 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:40.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:41 compute-1 sshd-session[132381]: Connection closed by 192.168.122.30 port 60002
Oct 10 09:56:41 compute-1 sshd-session[132378]: pam_unix(sshd:session): session closed for user zuul
Oct 10 09:56:41 compute-1 systemd[1]: session-52.scope: Deactivated successfully.
Oct 10 09:56:41 compute-1 systemd[1]: session-52.scope: Consumed 1min 4.123s CPU time.
Oct 10 09:56:41 compute-1 systemd-logind[789]: Session 52 logged out. Waiting for processes to exit.
Oct 10 09:56:41 compute-1 systemd-logind[789]: Removed session 52.
Oct 10 09:56:41 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:41 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:41 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:41 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:41 compute-1 ceph-mon[79167]: pgmap v314: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:56:42 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:42 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.156 141156 INFO neutron.common.config [-] Logging enabled!
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.156 141156 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.156 141156 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.156 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.156 141156 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.157 141156 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.157 141156 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.157 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.157 141156 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.157 141156 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.157 141156 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.157 141156 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.157 141156 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.157 141156 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.158 141156 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.158 141156 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.158 141156 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.158 141156 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.158 141156 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.158 141156 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.158 141156 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.158 141156 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.158 141156 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.158 141156 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.159 141156 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.159 141156 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.159 141156 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.159 141156 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.159 141156 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.159 141156 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.159 141156 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.159 141156 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.159 141156 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.159 141156 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.160 141156 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.160 141156 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.160 141156 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.160 141156 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.160 141156 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.160 141156 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.160 141156 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.160 141156 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.160 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.160 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.161 141156 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.161 141156 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.161 141156 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.161 141156 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.161 141156 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.161 141156 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.161 141156 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.161 141156 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.161 141156 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.161 141156 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.161 141156 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.162 141156 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.162 141156 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.162 141156 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.162 141156 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.162 141156 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.162 141156 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.162 141156 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.163 141156 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.163 141156 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.163 141156 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.163 141156 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.163 141156 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.164 141156 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.164 141156 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.164 141156 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.164 141156 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.164 141156 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.164 141156 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.164 141156 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.164 141156 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.164 141156 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.165 141156 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.165 141156 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.165 141156 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.165 141156 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.165 141156 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.165 141156 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.165 141156 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.165 141156 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.165 141156 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.166 141156 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.166 141156 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.166 141156 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.166 141156 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.166 141156 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.166 141156 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.166 141156 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.166 141156 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.166 141156 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.166 141156 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.167 141156 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.167 141156 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.167 141156 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.167 141156 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.167 141156 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.167 141156 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.167 141156 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.167 141156 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.167 141156 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.167 141156 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.168 141156 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.168 141156 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.168 141156 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.168 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.168 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.168 141156 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.168 141156 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.168 141156 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.168 141156 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.169 141156 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.169 141156 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.169 141156 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.169 141156 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.169 141156 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.169 141156 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.169 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.169 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.170 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.170 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.170 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.170 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.170 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.170 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.170 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.170 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.171 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.171 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.171 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.171 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.171 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.171 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.171 141156 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.171 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.171 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.171 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.172 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.172 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.172 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.172 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.172 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.172 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.172 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.172 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.172 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.173 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.173 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.173 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.173 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.173 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.173 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.173 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.173 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.173 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.173 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.174 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.174 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.174 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.174 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.174 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.174 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.174 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.174 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.174 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.174 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.175 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.175 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.175 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.175 141156 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.175 141156 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.175 141156 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.175 141156 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.175 141156 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.175 141156 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.176 141156 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.176 141156 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.176 141156 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.176 141156 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.176 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.176 141156 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.176 141156 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.176 141156 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.176 141156 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.177 141156 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.177 141156 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.177 141156 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.177 141156 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.177 141156 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.177 141156 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.177 141156 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.177 141156 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.177 141156 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.177 141156 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.178 141156 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.178 141156 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.178 141156 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.178 141156 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.178 141156 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.178 141156 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.178 141156 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.178 141156 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.178 141156 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.178 141156 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.179 141156 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.179 141156 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.179 141156 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.179 141156 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.179 141156 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.179 141156 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.179 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.179 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.179 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.179 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.180 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.180 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.180 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.180 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.180 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.180 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.180 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.180 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.180 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.180 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.181 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.181 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.181 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.181 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.181 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.181 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.181 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.181 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.181 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.181 141156 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.182 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.182 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.182 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.182 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.182 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.182 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.182 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.182 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.182 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.183 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.183 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.183 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.183 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.183 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.183 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.183 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.183 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.183 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.183 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.184 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.184 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.184 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.184 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.184 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.184 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.184 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.184 141156 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.184 141156 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.185 141156 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.185 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.185 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.185 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.185 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.185 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.185 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.185 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.185 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.186 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.186 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.186 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.186 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.186 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.186 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.186 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.186 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.186 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.187 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.187 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.187 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.187 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.187 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.187 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.187 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.187 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.187 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.188 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.188 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.188 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.188 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.188 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.188 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.188 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.188 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.188 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.188 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.189 141156 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.189 141156 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.201 141156 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.202 141156 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.202 141156 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.202 141156 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.203 141156 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.217 141156 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name ee0899c1-415d-4aa8-abe8-1240b4e8bf2c (UUID: ee0899c1-415d-4aa8-abe8-1240b4e8bf2c) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.239 141156 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.239 141156 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.239 141156 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.239 141156 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.242 141156 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.249 141156 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 10 09:56:42 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.254 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'ee0899c1-415d-4aa8-abe8-1240b4e8bf2c'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>], external_ids={}, name=ee0899c1-415d-4aa8-abe8-1240b4e8bf2c, nb_cfg_timestamp=1760090136565, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.255 141156 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f9372ffcf40>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.256 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.256 141156 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.256 141156 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.256 141156 INFO oslo_service.service [-] Starting 1 workers
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.260 141156 DEBUG oslo_service.service [-] Started child 141270 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.264 141156 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmprcodsn0p/privsep.sock']
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.267 141270 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-1938146'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.298 141270 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.299 141270 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.299 141270 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.302 141270 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.309 141270 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.316 141270 INFO eventlet.wsgi.server [-] (141270) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Oct 10 09:56:42 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:42 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:42 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:42.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:42 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:42 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:42 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:42.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:42 compute-1 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.938 141156 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.938 141156 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmprcodsn0p/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.812 141275 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.816 141275 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.818 141275 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.819 141275 INFO oslo.privsep.daemon [-] privsep daemon running as pid 141275
Oct 10 09:56:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:42.941 141275 DEBUG oslo.privsep.daemon [-] privsep: reply[40d37dff-bf20-4809-b174-a9fccb83d19a]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 09:56:43 compute-1 ceph-mon[79167]: pgmap v315: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 09:56:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:43 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.453 141275 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.453 141275 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.453 141275 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 09:56:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:43 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.924 141275 DEBUG oslo.privsep.daemon [-] privsep: reply[d61bbd96-24ca-4494-83a9-e7dd9b98b03f]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.927 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=ee0899c1-415d-4aa8-abe8-1240b4e8bf2c, column=external_ids, values=({'neutron:ovn-metadata-id': 'f0896111-8589-5c53-9955-6cd3547e7998'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.944 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ee0899c1-415d-4aa8-abe8-1240b4e8bf2c, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.965 141156 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.965 141156 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.965 141156 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.965 141156 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.965 141156 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.966 141156 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.966 141156 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.966 141156 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.966 141156 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.966 141156 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.967 141156 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.967 141156 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.967 141156 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.967 141156 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.967 141156 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.968 141156 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.968 141156 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.968 141156 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.968 141156 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.968 141156 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.968 141156 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.969 141156 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.969 141156 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.969 141156 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.969 141156 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.969 141156 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.970 141156 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.970 141156 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.970 141156 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.970 141156 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.970 141156 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.971 141156 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.971 141156 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.971 141156 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.971 141156 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.972 141156 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.972 141156 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.972 141156 DEBUG oslo_service.service [-] host                           = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.972 141156 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.973 141156 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.973 141156 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.973 141156 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.973 141156 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.973 141156 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.974 141156 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.974 141156 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.974 141156 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.974 141156 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.974 141156 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.974 141156 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.974 141156 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.975 141156 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.975 141156 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.975 141156 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.975 141156 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.975 141156 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.975 141156 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.976 141156 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.976 141156 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.976 141156 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.976 141156 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.976 141156 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.976 141156 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.976 141156 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.977 141156 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.977 141156 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.977 141156 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.977 141156 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.977 141156 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.977 141156 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.978 141156 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.978 141156 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.978 141156 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.978 141156 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.978 141156 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.978 141156 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.978 141156 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.979 141156 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.979 141156 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.979 141156 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.979 141156 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.979 141156 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.980 141156 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.980 141156 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.980 141156 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.980 141156 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.980 141156 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.980 141156 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.980 141156 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.981 141156 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.981 141156 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.981 141156 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.981 141156 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.981 141156 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.981 141156 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.982 141156 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.982 141156 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.982 141156 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.982 141156 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.982 141156 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.982 141156 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.983 141156 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.983 141156 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.983 141156 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.983 141156 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.983 141156 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.983 141156 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.983 141156 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.984 141156 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.984 141156 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.984 141156 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.984 141156 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.985 141156 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.985 141156 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.985 141156 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.985 141156 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.985 141156 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.985 141156 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.986 141156 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.986 141156 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.986 141156 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.986 141156 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.986 141156 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.987 141156 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.987 141156 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.987 141156 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.987 141156 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.987 141156 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.987 141156 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.988 141156 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.988 141156 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.988 141156 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.988 141156 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.988 141156 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.988 141156 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.989 141156 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.989 141156 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.989 141156 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.989 141156 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.989 141156 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.990 141156 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.990 141156 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.990 141156 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.990 141156 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.990 141156 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.990 141156 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.990 141156 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.991 141156 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.991 141156 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.991 141156 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.991 141156 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.991 141156 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.991 141156 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.992 141156 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.992 141156 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.992 141156 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.992 141156 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.992 141156 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.992 141156 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.992 141156 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.993 141156 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.993 141156 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.993 141156 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.993 141156 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.993 141156 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.993 141156 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.994 141156 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.994 141156 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.994 141156 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.994 141156 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.994 141156 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.994 141156 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.994 141156 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.995 141156 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.995 141156 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.995 141156 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.995 141156 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.995 141156 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.995 141156 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.996 141156 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.996 141156 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.996 141156 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.996 141156 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.996 141156 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.996 141156 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.997 141156 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.997 141156 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.997 141156 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.997 141156 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.997 141156 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.997 141156 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.997 141156 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.998 141156 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.998 141156 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.998 141156 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.998 141156 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.998 141156 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.998 141156 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.999 141156 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.999 141156 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.999 141156 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.999 141156 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.999 141156 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.999 141156 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:43.999 141156 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.000 141156 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.000 141156 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.000 141156 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.000 141156 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.000 141156 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.001 141156 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.001 141156 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.001 141156 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.001 141156 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.001 141156 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.001 141156 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.001 141156 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.002 141156 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.002 141156 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.002 141156 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.002 141156 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.002 141156 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.002 141156 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.002 141156 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.003 141156 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.003 141156 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.003 141156 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.003 141156 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.003 141156 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.003 141156 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.003 141156 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.004 141156 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.004 141156 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.004 141156 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.004 141156 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.004 141156 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.004 141156 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.005 141156 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.005 141156 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.005 141156 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.005 141156 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.006 141156 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.006 141156 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.006 141156 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.006 141156 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.006 141156 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.006 141156 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.006 141156 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.007 141156 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.007 141156 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.007 141156 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.007 141156 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.007 141156 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.008 141156 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.008 141156 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.008 141156 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.008 141156 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.008 141156 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.009 141156 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.009 141156 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.009 141156 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.009 141156 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.009 141156 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.009 141156 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.010 141156 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.010 141156 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.010 141156 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.010 141156 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.010 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.010 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.011 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.011 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.011 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.011 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.011 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.012 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.012 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.012 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.012 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.012 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.012 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.013 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.013 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.013 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.013 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.013 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.014 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.014 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.014 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.014 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.014 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.014 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.015 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.015 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.015 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.015 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.015 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.015 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.016 141156 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.016 141156 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.016 141156 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.016 141156 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.016 141156 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 09:56:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:56:44.016 141156 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 10 09:56:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:44 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:44 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:44 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:44 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:44.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:44 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:44 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:44 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:44.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:45 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:45 compute-1 ceph-mon[79167]: pgmap v316: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:56:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:45 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:46 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:46 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:46 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:46 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 09:56:46 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:46.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 09:56:46 compute-1 sshd-session[141282]: Accepted publickey for zuul from 192.168.122.30 port 46468 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 09:56:46 compute-1 systemd-logind[789]: New session 53 of user zuul.
Oct 10 09:56:46 compute-1 systemd[1]: Started Session 53 of User zuul.
Oct 10 09:56:46 compute-1 sshd-session[141282]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 09:56:46 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:56:46 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:46 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:46 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:46.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:47 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:47 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:56:47 compute-1 ceph-mon[79167]: pgmap v317: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:56:47 compute-1 python3.9[141436]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 09:56:47 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:47 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:48 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:48 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:48 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:48 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:48.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:48 compute-1 sudo[141590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxnnivxltegeevfuowbnnzihjuvcsbcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090208.298361-63-164311577077803/AnsiballZ_command.py'
Oct 10 09:56:48 compute-1 sudo[141590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:48 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:48 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:48 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:48.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:49 compute-1 python3.9[141592]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:56:49 compute-1 sudo[141590]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:49 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:49 compute-1 ceph-mon[79167]: pgmap v318: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 09:56:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:49 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c004440 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:50 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:50 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:50 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/095650 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 09:56:50 compute-1 sudo[141756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyxxuhpnipzquazdpceybbmubsobcane ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090209.551538-96-85438954486608/AnsiballZ_systemd_service.py'
Oct 10 09:56:50 compute-1 sudo[141756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:50 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:50 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:56:50 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:50.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:56:50 compute-1 python3.9[141758]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 10 09:56:50 compute-1 systemd[1]: Reloading.
Oct 10 09:56:50 compute-1 systemd-rc-local-generator[141784]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:56:50 compute-1 systemd-sysv-generator[141788]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:56:50 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:50 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:56:50 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:50.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:56:50 compute-1 sudo[141756]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:51 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:51 compute-1 python3.9[141946]: ansible-ansible.builtin.service_facts Invoked
Oct 10 09:56:51 compute-1 ceph-mon[79167]: pgmap v319: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:56:51 compute-1 network[141963]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 10 09:56:51 compute-1 network[141964]: 'network-scripts' will be removed from distribution in near future.
Oct 10 09:56:51 compute-1 network[141965]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 10 09:56:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:51 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:52 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:52 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49380013a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:56:52 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:52 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:52 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:52.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:52 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:52 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:56:52 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:52.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:56:52 compute-1 sudo[141979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:56:52 compute-1 sudo[141979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:56:52 compute-1 sudo[141979]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:53 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:53 compute-1 ceph-mon[79167]: pgmap v320: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:56:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:53 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:54 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:54 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:54 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:54 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:56:54 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:54.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:56:54 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:54 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:54 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:54.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:55 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4938001eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:55 compute-1 ceph-mon[79167]: pgmap v321: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:56:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:55 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:56 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:56 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:56 compute-1 sudo[142255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qewxdqahiznxxcsyumstfqosdlqfnodi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090216.0391147-153-164084619519706/AnsiballZ_systemd_service.py'
Oct 10 09:56:56 compute-1 sudo[142255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:56 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:56 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:56 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:56.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:56 compute-1 python3.9[142257]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:56:56 compute-1 sudo[142255]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:56 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:56 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:56:56 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:56.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:56:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:57 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:56:57 compute-1 sudo[142409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ooamdukifqqpayadvfwlnpekmeoiwmmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090216.917085-153-168252873925874/AnsiballZ_systemd_service.py'
Oct 10 09:56:57 compute-1 sudo[142409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:57 compute-1 python3.9[142411]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:56:57 compute-1 sudo[142409]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:57 compute-1 ceph-mon[79167]: pgmap v322: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:56:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:57 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4938001eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:58 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:58 compute-1 sudo[142562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymcxhbinfasazmlsswghixxypbkfnaxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090217.878418-153-209696339257047/AnsiballZ_systemd_service.py'
Oct 10 09:56:58 compute-1 sudo[142562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:58 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:58 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:56:58 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:56:58.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:56:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:58 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 09:56:58 compute-1 python3.9[142564]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:56:58 compute-1 sudo[142562]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:58 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:56:58 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 09:56:58 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:56:58.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 09:56:59 compute-1 sudo[142715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjqtnjxyqidpldzteywqhoujfvfakgzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090218.8169343-153-160951077355215/AnsiballZ_systemd_service.py'
Oct 10 09:56:59 compute-1 sudo[142715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:56:59 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:59 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:59 compute-1 python3.9[142717]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:56:59 compute-1 sudo[142715]: pam_unix(sudo:session): session closed for user root
Oct 10 09:56:59 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:56:59 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:56:59 compute-1 ceph-mon[79167]: pgmap v323: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Oct 10 09:57:00 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:00 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4938001eb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:00 compute-1 sudo[142869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvszykiylwxxnkgoeriezszgfadzpouj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090219.7387967-153-279780812146743/AnsiballZ_systemd_service.py'
Oct 10 09:57:00 compute-1 sudo[142869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:00 compute-1 python3.9[142871]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:57:00 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:00 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:00 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:00.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:00 compute-1 sudo[142869]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:00 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:00 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:00 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:00.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:00 compute-1 sudo[143022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cllzwpsqamkzpnrmgkhmlkppmewaheyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090220.6703916-153-124717221990686/AnsiballZ_systemd_service.py'
Oct 10 09:57:01 compute-1 sudo[143022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:01 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:01 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:01 compute-1 python3.9[143024]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:57:01 compute-1 sudo[143022]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:01 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:01 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 09:57:01 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:01 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 09:57:01 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:01 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:01 compute-1 sudo[143176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmsfdqlvqlwjocmvdyaxkvuhsdssjqty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090221.5289323-153-141667001467426/AnsiballZ_systemd_service.py'
Oct 10 09:57:01 compute-1 sudo[143176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:01 compute-1 ceph-mon[79167]: pgmap v324: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 10 09:57:01 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:57:02 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:02 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:02 compute-1 python3.9[143178]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 09:57:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:57:02 compute-1 sudo[143176]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:02 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:02 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:02 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:02.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:02 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:02 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:02 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:02.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:03 compute-1 sudo[143329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trrdqiwewwqulhwcpjxpsnyqtgokgran ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090222.7039812-309-89703299577309/AnsiballZ_file.py'
Oct 10 09:57:03 compute-1 sudo[143329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:03 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4938003340 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:03 compute-1 python3.9[143331]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:57:03 compute-1 sudo[143329]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:03 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:03 compute-1 ceph-mon[79167]: pgmap v325: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 09:57:04 compute-1 sudo[143482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxfbwjrhtdxtdexeihdmhtzgwosylese ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090223.6503453-309-138606859742936/AnsiballZ_file.py'
Oct 10 09:57:04 compute-1 sudo[143482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:04 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:04 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:04 compute-1 python3.9[143484]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:57:04 compute-1 sudo[143482]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:04 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:04 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:04 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:04.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:04 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:04 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 09:57:04 compute-1 sudo[143644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmmxajpnkdematiwhgwjzxfwfyetyubo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090224.4559386-309-95093656253984/AnsiballZ_file.py'
Oct 10 09:57:04 compute-1 sudo[143644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:04 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:04 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:04 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:04.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:04 compute-1 podman[143608]: 2025-10-10 09:57:04.840755151 +0000 UTC m=+0.114302489 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 10 09:57:04 compute-1 python3.9[143653]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:57:05 compute-1 sudo[143644]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:05 compute-1 ceph-mon[79167]: pgmap v326: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 09:57:05 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:05 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:05 compute-1 sudo[143813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epjcolkynkdlwxvejyjpxhlggaapqfil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090225.1636162-309-106765640839653/AnsiballZ_file.py'
Oct 10 09:57:05 compute-1 sudo[143813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:05 compute-1 python3.9[143815]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:57:05 compute-1 sudo[143813]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:05 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:05 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:06 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:06 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:06 compute-1 sudo[143965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcjjobvulovljrpkygxlimoalojpqrgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090225.9743047-309-144776557129493/AnsiballZ_file.py'
Oct 10 09:57:06 compute-1 sudo[143965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:06 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:06 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:06 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:06.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:06 compute-1 python3.9[143967]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:57:06 compute-1 sudo[143965]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:06 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:06 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:57:06 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:06.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:57:07 compute-1 sudo[144117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syowbibiinqqpbbnbohnzcznsoadsbdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090226.827513-309-101037198248607/AnsiballZ_file.py'
Oct 10 09:57:07 compute-1 sudo[144117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:07 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:57:07 compute-1 python3.9[144119]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:57:07 compute-1 sudo[144117]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:07 compute-1 ceph-mon[79167]: pgmap v327: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 09:57:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:07 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:07 compute-1 sudo[144270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffzhuzkdianuhpgdfbzyamgvzagxfhus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090227.610451-309-6133497017273/AnsiballZ_file.py'
Oct 10 09:57:07 compute-1 sudo[144270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:08 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:08 compute-1 python3.9[144272]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:57:08 compute-1 sudo[144270]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:08 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:08 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:08 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:08.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:08 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:08 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:08 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:08.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:08 compute-1 sudo[144422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjwybbormtqdqhsfljopasfbplzhbgpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090228.5241427-459-158292117243377/AnsiballZ_file.py'
Oct 10 09:57:08 compute-1 sudo[144422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:09 compute-1 python3.9[144424]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:57:09 compute-1 sudo[144422]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:09 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:09 compute-1 ceph-mon[79167]: pgmap v328: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s rd, 1023 B/s wr, 146 op/s
Oct 10 09:57:09 compute-1 sudo[144575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atvdfelpvlbwuflairnljocfooshwobq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090229.2160473-459-67766968890799/AnsiballZ_file.py'
Oct 10 09:57:09 compute-1 sudo[144575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:09 compute-1 python3.9[144577]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:57:09 compute-1 sudo[144575]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:09 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:10 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:10 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:10 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/095710 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 09:57:10 compute-1 sudo[144727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oopwgtzybzkshjpfzgqwpmimspdssaty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090229.9963074-459-206887366822981/AnsiballZ_file.py'
Oct 10 09:57:10 compute-1 sudo[144727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:10 compute-1 podman[144729]: 2025-10-10 09:57:10.474035878 +0000 UTC m=+0.077173890 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 10 09:57:10 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:10 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:10 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:10.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:10 compute-1 python3.9[144730]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:57:10 compute-1 sudo[144727]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:10 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:10 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:10 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:10.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:11 compute-1 sudo[144898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bytlaqlerlbcddlipbuyogcguzgmpllp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090230.808411-459-127636486608941/AnsiballZ_file.py'
Oct 10 09:57:11 compute-1 sudo[144898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:11 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4938003c60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:11 compute-1 python3.9[144900]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:57:11 compute-1 sudo[144898]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:11 compute-1 ceph-mon[79167]: pgmap v329: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 938 B/s wr, 145 op/s
Oct 10 09:57:11 compute-1 sudo[145051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-firsefnqvjjshdvlfbcxityobmraoopt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090231.474021-459-138565480273654/AnsiballZ_file.py'
Oct 10 09:57:11 compute-1 sudo[145051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:11 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4938003c60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:12 compute-1 python3.9[145053]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:57:12 compute-1 sudo[145051]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:12 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:12 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:57:12 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:12 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:12 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:12.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:12 compute-1 sudo[145203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgrxdrhumkqwqljdjmdjqjdrgdfdlxtq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090232.188486-459-135007460317993/AnsiballZ_file.py'
Oct 10 09:57:12 compute-1 sudo[145203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:12 compute-1 python3.9[145205]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:57:12 compute-1 sudo[145203]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:12 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:12 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:12 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:12.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:12 compute-1 sudo[145230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:57:12 compute-1 sudo[145230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:57:12 compute-1 sudo[145230]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:13 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:13 compute-1 sudo[145380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brzipfqfjfgsjbtutmrmzhnuuvzwrofh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090232.9400556-459-16967634676613/AnsiballZ_file.py'
Oct 10 09:57:13 compute-1 sudo[145380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:13 compute-1 python3.9[145383]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:57:13 compute-1 sudo[145380]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:13 compute-1 ceph-mon[79167]: pgmap v330: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 938 B/s wr, 145 op/s
Oct 10 09:57:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:13 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4938003c60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:14 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4938003c60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:14 compute-1 sudo[145533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opafbcusiqocgqwdmcqndevkfcereddp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090233.8986926-612-210881455667117/AnsiballZ_command.py'
Oct 10 09:57:14 compute-1 sudo[145533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:14 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:14 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:14 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:14.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:14 compute-1 python3.9[145535]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:57:14 compute-1 sudo[145533]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:14 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:14 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:14 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:14.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:15 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:15 compute-1 python3.9[145687]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 10 09:57:15 compute-1 ceph-mon[79167]: pgmap v331: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 85 B/s wr, 143 op/s
Oct 10 09:57:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:15 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:16 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:16 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:16 compute-1 sudo[145838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kawuailueuhhauiuabzakbpfpxlniqfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090235.7579281-666-120261258938910/AnsiballZ_systemd_service.py'
Oct 10 09:57:16 compute-1 sudo[145838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:16 compute-1 python3.9[145840]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 10 09:57:16 compute-1 systemd[1]: Reloading.
Oct 10 09:57:16 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:16 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:16 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:16.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:16 compute-1 systemd-sysv-generator[145870]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:57:16 compute-1 systemd-rc-local-generator[145865]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:57:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:57:16 compute-1 sudo[145838]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:16 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:16 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:57:16 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:16.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:57:17 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:17 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:57:17 compute-1 sudo[146026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cthkiytefrgndfpsyoxtzmuvlkfvgeyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090237.041136-690-265584623485127/AnsiballZ_command.py'
Oct 10 09:57:17 compute-1 sudo[146026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:17 compute-1 ceph-mon[79167]: pgmap v332: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 85 B/s wr, 143 op/s
Oct 10 09:57:17 compute-1 python3.9[146028]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:57:17 compute-1 sudo[146026]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:17 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:17 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:18 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/095718 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 09:57:18 compute-1 sudo[146179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwkunjuxaswlquozcfzynkqmylrdofwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090237.8638422-690-245472340996966/AnsiballZ_command.py'
Oct 10 09:57:18 compute-1 sudo[146179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:18 compute-1 python3.9[146181]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:57:18 compute-1 sudo[146179]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:18 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:18 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:18 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:18.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:18 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:18 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:57:18 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:18.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:57:18 compute-1 sudo[146332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpdxvnukaagkvzfdowrsmnxqdzucmkxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090238.6243157-690-260085516311436/AnsiballZ_command.py'
Oct 10 09:57:18 compute-1 sudo[146332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:19 compute-1 python3.9[146334]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:57:19 compute-1 sudo[146332]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:19 compute-1 ceph-mon[79167]: pgmap v333: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 85 B/s wr, 143 op/s
Oct 10 09:57:19 compute-1 sudo[146486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwqckspuuoxzfnkkfeojwzgkjzhliaai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090239.3548703-690-234471924220397/AnsiballZ_command.py'
Oct 10 09:57:19 compute-1 sudo[146486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:19 compute-1 python3.9[146488]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:57:19 compute-1 sudo[146486]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:20 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:20 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:20 compute-1 sudo[146639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtojwgnpapcfdvwrqcbfnuymzhaxvlug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090240.1271033-690-243790611251804/AnsiballZ_command.py'
Oct 10 09:57:20 compute-1 sudo[146639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:20 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:20 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:20 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:20.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:20 compute-1 python3.9[146641]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:57:20 compute-1 sudo[146639]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:20 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:20 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:20 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:20.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:21 compute-1 sudo[146792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hagoxlxxmsglzdhsqrvfnxtnzjdmzirl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090240.9018602-690-71386538964807/AnsiballZ_command.py'
Oct 10 09:57:21 compute-1 sudo[146792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:21 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:21 compute-1 python3.9[146794]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:57:21 compute-1 sudo[146792]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:21 compute-1 ceph-mon[79167]: pgmap v334: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 09:57:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:21 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:22 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:22 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:22 compute-1 sudo[146947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsqyemdintgnwexdslleyobctawdxoji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090241.7140505-690-115296896645417/AnsiballZ_command.py'
Oct 10 09:57:22 compute-1 sudo[146947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:22 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:57:22 compute-1 python3.9[146949]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 09:57:22 compute-1 sudo[146947]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:22 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:22 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:57:22 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:22.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:57:22 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:22 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:22 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:22.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:23 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:23 compute-1 sudo[147102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juwwheuzmvvxmnjptvaigbqceihkjwyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090242.9950128-852-33772088972923/AnsiballZ_getent.py'
Oct 10 09:57:23 compute-1 sudo[147102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:23 compute-1 python3.9[147104]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Oct 10 09:57:23 compute-1 sudo[147102]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:23 compute-1 ceph-mon[79167]: pgmap v335: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 09:57:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:23 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:24 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:24 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:24 compute-1 sudo[147255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tadbtztmmiemmentxkcltnjgzkrdzpxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090243.9050415-876-271525482573575/AnsiballZ_group.py'
Oct 10 09:57:24 compute-1 sudo[147255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:24 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:24 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:24 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:24.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:24 compute-1 python3.9[147257]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 10 09:57:24 compute-1 groupadd[147258]: group added to /etc/group: name=libvirt, GID=42473
Oct 10 09:57:24 compute-1 groupadd[147258]: group added to /etc/gshadow: name=libvirt
Oct 10 09:57:24 compute-1 groupadd[147258]: new group: name=libvirt, GID=42473
Oct 10 09:57:24 compute-1 sudo[147255]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:24 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:24 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:24 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:24.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:25 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:25 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:25 compute-1 sudo[147414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvxcxiggnxlwbfyxvrwgprnnczrckhsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090244.9212797-900-15104328412958/AnsiballZ_user.py'
Oct 10 09:57:25 compute-1 sudo[147414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:25 compute-1 ceph-mon[79167]: pgmap v336: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:57:25 compute-1 python3.9[147416]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 10 09:57:25 compute-1 useradd[147418]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Oct 10 09:57:25 compute-1 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 09:57:25 compute-1 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 09:57:25 compute-1 sudo[147414]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:25 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:25 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:26 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:26 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:26 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:26 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:57:26 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:26.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:57:26 compute-1 sudo[147575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egbvjjjozuvtztqvcfwtxvicswkmpxit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090246.36405-933-128978086169999/AnsiballZ_setup.py'
Oct 10 09:57:26 compute-1 sudo[147575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:26 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:26 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:26 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:26.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:27 compute-1 python3.9[147577]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 09:57:27 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:57:27 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:27 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:27 compute-1 sudo[147575]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:27 compute-1 sudo[147660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qluktypnkaksgfxgeqhykmqoxgiykkpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090246.36405-933-128978086169999/AnsiballZ_dnf.py'
Oct 10 09:57:27 compute-1 sudo[147660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:57:27 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:27 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 09:57:27 compute-1 ceph-mon[79167]: pgmap v337: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 09:57:27 compute-1 python3.9[147662]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 09:57:27 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:27 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49100016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:28 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:28 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:28 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:57:28 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:28.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:57:28 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:28 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:28 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:28.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:29 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:29 compute-1 ceph-mon[79167]: pgmap v338: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 85 B/s wr, 0 op/s
Oct 10 09:57:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:29 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:30 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:30 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49100016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:30 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:30 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:30 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:30.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:30 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:30 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 09:57:30 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:30 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 09:57:30 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:30 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:57:30 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:30.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:57:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:31 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:31 compute-1 ceph-mon[79167]: pgmap v339: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 10 09:57:31 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:57:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:31 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:32 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:32 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:32 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:57:32 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:32 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:32 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:32.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:32 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:32 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:57:32 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:32.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:57:33 compute-1 sudo[147676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:57:33 compute-1 sudo[147676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:57:33 compute-1 sudo[147676]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:33 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49100016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:33 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 09:57:33 compute-1 ceph-mon[79167]: pgmap v340: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 09:57:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:33 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:34 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:34 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:34 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:34 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:34 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:34.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:34 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:34 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:34 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:34.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:35 compute-1 podman[147702]: 2025-10-10 09:57:35.018930342 +0000 UTC m=+0.123537394 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 10 09:57:35 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:35 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:35 compute-1 ceph-mon[79167]: pgmap v341: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 09:57:35 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:35 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:36 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:36 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:36 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:36 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:36 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:36.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:36 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:36 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:36 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:36.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:57:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:37 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:37 compute-1 ceph-mon[79167]: pgmap v342: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 09:57:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:37 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:38 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:38 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:38 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 09:57:38 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:38.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 09:57:38 compute-1 sudo[147817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:57:38 compute-1 sudo[147817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:57:38 compute-1 sudo[147817]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:38 compute-1 sudo[147844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 09:57:38 compute-1 sudo[147844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:57:38 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:38 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:38 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:38.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:39 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:39 compute-1 sudo[147844]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:39 compute-1 ceph-mon[79167]: pgmap v343: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 09:57:39 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:57:39 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:57:39 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:57:39 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:57:39 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 09:57:39 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:57:39 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:57:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:39 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:40 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:40 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:40 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/095740 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 09:57:40 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:40 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:40 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:40.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:40 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:40 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 09:57:40 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:40.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 09:57:40 compute-1 podman[147958]: 2025-10-10 09:57:40.977953006 +0000 UTC m=+0.074693736 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 10 09:57:41 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:41 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:41 compute-1 ceph-mon[79167]: pgmap v344: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 09:57:41 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:41 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:42 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:42 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:57:42.190 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 09:57:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:57:42.191 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 09:57:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:57:42.191 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 09:57:42 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:57:42 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:42 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:42 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:42.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:42 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:42 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 09:57:42 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:42.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 09:57:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:43 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:43 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:43 compute-1 ceph-mon[79167]: pgmap v345: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 09:57:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:44 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:44 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:44 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 09:57:44 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:44.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 09:57:44 compute-1 sudo[148008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 09:57:44 compute-1 sudo[148008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:57:44 compute-1 sudo[148008]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:44 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:44 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:44 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:44.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:45 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:45 compute-1 ceph-mon[79167]: pgmap v346: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 10 09:57:45 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:57:45 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:57:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:45 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:46 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:46 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:46 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:57:46 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:46 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:46 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:46.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:46 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:46 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:46 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:46.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:57:47 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:47 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:47 compute-1 ceph-mon[79167]: pgmap v347: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 10 09:57:47 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:47 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:48 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:48 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:48 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:48 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:48.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:48 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:48 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:48 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:48.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:49 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:49 compute-1 ceph-mon[79167]: pgmap v348: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 10 09:57:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:49 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:50 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:50 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:50 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:50 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 09:57:50 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:50.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 09:57:50 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:50 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 09:57:50 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:50.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 09:57:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:51 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:51 compute-1 ceph-mon[79167]: pgmap v349: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:57:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:51 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49080016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:52 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:52 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c001fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:57:52 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:52 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:52 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:52.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:52 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:52 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 09:57:52 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:52.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 09:57:53 compute-1 sudo[148044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:57:53 compute-1 sudo[148044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:57:53 compute-1 sudo[148044]: pam_unix(sudo:session): session closed for user root
Oct 10 09:57:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:53 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:53 compute-1 ceph-mon[79167]: pgmap v350: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:57:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:53 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:54 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:54 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49080016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:54 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:54 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 09:57:54 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:54.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 09:57:54 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:54 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:54 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:54.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:55 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c003130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:55 compute-1 ceph-mon[79167]: pgmap v351: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:57:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:55 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:56 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:56 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:56 compute-1 kernel: SELinux:  Converting 2768 SID table entries...
Oct 10 09:57:56 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Oct 10 09:57:56 compute-1 kernel: SELinux:  policy capability open_perms=1
Oct 10 09:57:56 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Oct 10 09:57:56 compute-1 kernel: SELinux:  policy capability always_check_network=0
Oct 10 09:57:56 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 10 09:57:56 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 10 09:57:56 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 10 09:57:56 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:56 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 09:57:56 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:56.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 09:57:56 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:56 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:56 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:56.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:57:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:57 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49080016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:57 compute-1 ceph-mon[79167]: pgmap v352: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:57:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:57 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c003130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:58 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c003130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:58 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:58 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:57:58 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:57:58.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:57:58 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:57:58 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 09:57:58 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:57:58.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 09:57:59 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:59 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:57:59 compute-1 ceph-mon[79167]: pgmap v353: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:57:59 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:57:59 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:00 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:00 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c003130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:00 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:00 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:00 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:00.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:00 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:00 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:00 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:00.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:01 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:01 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c003130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:01 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:01 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:01 compute-1 ceph-mon[79167]: pgmap v354: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:01 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:58:02 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:02 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:58:02 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:02 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 09:58:02 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:02.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 09:58:02 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:02 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:02 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:02.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:03 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c003130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:03 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c003130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:03 compute-1 ceph-mon[79167]: pgmap v355: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:04 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:04 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:04 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:04 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:04 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:04.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:04 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:04 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 09:58:04 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:04.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 09:58:05 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:05 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:05 compute-1 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Oct 10 09:58:05 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:05 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c003130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:06 compute-1 ceph-mon[79167]: pgmap v356: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:06 compute-1 podman[148088]: 2025-10-10 09:58:06.060590489 +0000 UTC m=+0.143741953 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 09:58:06 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:06 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c003130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:06 compute-1 kernel: SELinux:  Converting 2768 SID table entries...
Oct 10 09:58:06 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Oct 10 09:58:06 compute-1 kernel: SELinux:  policy capability open_perms=1
Oct 10 09:58:06 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Oct 10 09:58:06 compute-1 kernel: SELinux:  policy capability always_check_network=0
Oct 10 09:58:06 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 10 09:58:06 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 10 09:58:06 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 10 09:58:06 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:06 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:06 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:06.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:06 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:06 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:06 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:06.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:07 compute-1 ceph-mon[79167]: pgmap v357: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:58:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:07 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:07 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c003130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:08 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:08 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:08 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:08 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:08.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:08 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:08 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:58:08 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:08.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:58:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:09 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:09 compute-1 ceph-mon[79167]: pgmap v358: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:58:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:09 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:10 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:10 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c003130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:10 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:10 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:10 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:10.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:10 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:10 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:10 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:10.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:11 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:11 compute-1 ceph-mon[79167]: pgmap v359: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:11 compute-1 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Oct 10 09:58:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:11 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:12 compute-1 podman[148120]: 2025-10-10 09:58:12.00499698 +0000 UTC m=+0.089940597 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:58:12 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:12 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:58:12 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:12 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 09:58:12 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:12.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 09:58:12 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:12 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 09:58:12 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:12.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 09:58:13 compute-1 sudo[148141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:58:13 compute-1 sudo[148141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:58:13 compute-1 sudo[148141]: pam_unix(sudo:session): session closed for user root
Oct 10 09:58:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:13 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c003130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:13 compute-1 ceph-mon[79167]: pgmap v360: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:13 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:14 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:14 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:14 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:14 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:14.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:14 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:14 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:14 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:14.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:15 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:15 compute-1 ceph-mon[79167]: pgmap v361: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:15 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c003130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:16 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:16 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:16 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:16 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:16 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:16.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:58:16 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:16 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:16 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:16.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:58:17 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:17 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:17 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:17 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:18 compute-1 ceph-mon[79167]: pgmap v362: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:18 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c003130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:18 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:18 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:58:18 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:18.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:58:18 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:18 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:18 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:18.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:19 compute-1 ceph-mon[79167]: pgmap v363: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:58:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:20 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:20 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:20 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:20 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:20 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:20.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:20 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:20 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 09:58:20 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:20.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 09:58:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:21 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:21 compute-1 ceph-mon[79167]: pgmap v364: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:21 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:22 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:22 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:22 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:58:22 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:22 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:22 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:22.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:22 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:22 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:22 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:22.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:23 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:23 compute-1 ceph-mon[79167]: pgmap v365: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:23 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:24 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:24 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4938001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:24 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:24 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:24 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:24.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:24 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:24 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:24 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:24.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:25 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:25 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:25 compute-1 ceph-mon[79167]: pgmap v366: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:26 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:26 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:26 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:26 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:26 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:26 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:58:26 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:26.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:58:26 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:26 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 09:58:26 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:26.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 09:58:27 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:58:27 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:27 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4938001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:27 compute-1 ceph-mon[79167]: pgmap v367: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:28 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4938001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:28 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:28 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:28 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 09:58:28 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:28.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 09:58:28 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:28 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:28 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:28.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:29 compute-1 ceph-mon[79167]: pgmap v368: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:58:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:29 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4904000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:30 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:30 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:30 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:30 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4938001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:30 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:30 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:30 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:30.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:30 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:30 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 09:58:30 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:30.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 09:58:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:31 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910001dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:31 compute-1 ceph-mon[79167]: pgmap v369: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:31 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:58:32 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:32 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49040016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:32 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:32 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:32 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:58:32 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:32 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:58:32 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:32.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:58:32 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:32 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:32 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:32.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:33 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4938001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:33 compute-1 sudo[155243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:58:33 compute-1 sudo[155243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:58:33 compute-1 sudo[155243]: pam_unix(sudo:session): session closed for user root
Oct 10 09:58:33 compute-1 ceph-mon[79167]: pgmap v370: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:34 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:34 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910001dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:34 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:34 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49040016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:34 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:34 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:34 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:34.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:34 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:34 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 09:58:34 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:34.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 09:58:35 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:35 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:36 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:36 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4938001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:36 compute-1 ceph-mon[79167]: pgmap v371: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:36 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:36 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910001dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:36 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:36 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:36 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:36.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:36 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:36 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:36 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:36.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:37 compute-1 podman[156807]: 2025-10-10 09:58:37.040584815 +0000 UTC m=+0.131545050 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 09:58:37 compute-1 ceph-mon[79167]: pgmap v372: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:58:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:37 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49040016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:38 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:38 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4938001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:38 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:38 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:38 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:38.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:38 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:38 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:38 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:38.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:39 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910001dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:39 compute-1 ceph-mon[79167]: pgmap v373: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:58:40 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:40 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910001dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:40 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:40 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003e10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:40 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:40 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:40 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:40.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:40 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:40 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 09:58:40 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:40.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 09:58:41 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:41 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49380046d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:41 compute-1 ceph-mon[79167]: pgmap v374: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:42 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:42 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910001dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:58:42.191 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 09:58:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:58:42.192 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 09:58:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:58:42.193 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 09:58:42 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:42 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4904002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:42 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:58:42 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:42 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:58:42 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:42.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:58:42 compute-1 podman[159433]: 2025-10-10 09:58:42.953419795 +0000 UTC m=+0.060629700 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:58:42 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:42 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:58:42 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:42.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:58:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:43 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003e30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:43 compute-1 ceph-mon[79167]: pgmap v375: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:44 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49380046d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:44 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49380046d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:44 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:44 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:44 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:44.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:44 compute-1 sudo[160326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:58:44 compute-1 sudo[160326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:58:44 compute-1 sudo[160326]: pam_unix(sudo:session): session closed for user root
Oct 10 09:58:44 compute-1 sudo[160396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 10 09:58:44 compute-1 sudo[160396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:58:44 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:44 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:44 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:44.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:45 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49380046d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:45 compute-1 podman[160851]: 2025-10-10 09:58:45.494202528 +0000 UTC m=+0.059364199 container exec 8a2c16c69263d410aefee28722d7403147210c67386f896d09115878d61e6ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 09:58:45 compute-1 podman[160851]: 2025-10-10 09:58:45.590601494 +0000 UTC m=+0.155763155 container exec_died 8a2c16c69263d410aefee28722d7403147210c67386f896d09115878d61e6ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 10 09:58:45 compute-1 ceph-mon[79167]: pgmap v376: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:45 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 10 09:58:46 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:46 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:46 compute-1 podman[161332]: 2025-10-10 09:58:46.067266233 +0000 UTC m=+0.055392411 container exec db44b00524aaf85460d5de4878ad14e99cea939212a287f5217a7de1b31f7321 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:58:46 compute-1 podman[161332]: 2025-10-10 09:58:46.073195316 +0000 UTC m=+0.061321494 container exec_died db44b00524aaf85460d5de4878ad14e99cea939212a287f5217a7de1b31f7321 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 09:58:46 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:46 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49380046d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:46 compute-1 podman[161638]: 2025-10-10 09:58:46.456013001 +0000 UTC m=+0.090477604 container exec f06251c00be534001d35bdb537d404f9774100b5ad0c3caa27f9fd4f4b4dedb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct 10 09:58:46 compute-1 podman[161638]: 2025-10-10 09:58:46.481759307 +0000 UTC m=+0.116223860 container exec_died f06251c00be534001d35bdb537d404f9774100b5ad0c3caa27f9fd4f4b4dedb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 09:58:46 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:46 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:46 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:46.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:46 compute-1 podman[161857]: 2025-10-10 09:58:46.795209418 +0000 UTC m=+0.071048651 container exec 7ce1be4884f313700b9f3fe6c66e14b163c4d6bf31089a45b751a6389f8be562 (image=quay.io/ceph/haproxy:2.3, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw)
Oct 10 09:58:46 compute-1 podman[161857]: 2025-10-10 09:58:46.81167835 +0000 UTC m=+0.087517533 container exec_died 7ce1be4884f313700b9f3fe6c66e14b163c4d6bf31089a45b751a6389f8be562 (image=quay.io/ceph/haproxy:2.3, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw)
Oct 10 09:58:46 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:58:46 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:46 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:58:46 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:46.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:58:47 compute-1 podman[162058]: 2025-10-10 09:58:47.075726065 +0000 UTC m=+0.065785466 container exec 8efe4c511a9ca69c4fbe77af128c823fd37bedd1e530d3bfb1fc1044e25a5138 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., release=1793, vcs-type=git, build-date=2023-02-22T09:23:20, name=keepalived)
Oct 10 09:58:47 compute-1 podman[162058]: 2025-10-10 09:58:47.098776458 +0000 UTC m=+0.088835829 container exec_died 8efe4c511a9ca69c4fbe77af128c823fd37bedd1e530d3bfb1fc1044e25a5138 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, io.openshift.tags=Ceph keepalived, release=1793, vcs-type=git, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, version=2.2.4, io.buildah.version=1.28.2, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph)
Oct 10 09:58:47 compute-1 sudo[160396]: pam_unix(sudo:session): session closed for user root
Oct 10 09:58:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:58:47 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:47 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003e70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:47 compute-1 sudo[162368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 09:58:47 compute-1 sudo[162368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:58:47 compute-1 sudo[162368]: pam_unix(sudo:session): session closed for user root
Oct 10 09:58:47 compute-1 sudo[162435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 09:58:47 compute-1 sudo[162435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:58:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:48 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4914002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:48 compute-1 ceph-mon[79167]: pgmap v377: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:48 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:58:48 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:58:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:48 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910001f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:48 compute-1 sudo[162435]: pam_unix(sudo:session): session closed for user root
Oct 10 09:58:48 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:48 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:48 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:48.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:48 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:48 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:48 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:48.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:49 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49380046d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:49 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 10 09:58:49 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:58:49 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:58:49 compute-1 ceph-mon[79167]: pgmap v378: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:58:50 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:50 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003e90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:50 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:50 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003e90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:50 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:50 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:50 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:50.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:50 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:50 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:50 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:50.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:51 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003e90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:51 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 10 09:58:51 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:58:51 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 09:58:51 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:58:52 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:52 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4904002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:52 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:52 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:58:52 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:52 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:52 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:52.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:52 compute-1 ceph-mon[79167]: pgmap v379: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:52 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:58:52 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 09:58:52 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 09:58:52 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 09:58:52 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:52 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:58:52 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:52.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:58:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:53 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:53 compute-1 sudo[165199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:58:53 compute-1 sudo[165199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:58:53 compute-1 sudo[165199]: pam_unix(sudo:session): session closed for user root
Oct 10 09:58:54 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:54 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003eb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:54 compute-1 ceph-mon[79167]: pgmap v380: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:54 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:54 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4904002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:54 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:54 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:58:54 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:54.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:58:54 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:54 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:54 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:54.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:55 compute-1 ceph-mon[79167]: pgmap v381: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:55 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:56 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:56 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49380046d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:56 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:56 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934001080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:56 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:56 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:56 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:56.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:57 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:57 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:58:57 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:56.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:58:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:58:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:57 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4904003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:58 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:58 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49380046d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:58:58 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:58 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:58:58 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:58:58.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:58:59 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:58:59 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:58:59 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:58:59.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:58:59 compute-1 ceph-mon[79167]: pgmap v382: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:58:59 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:58:59 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934001080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:00 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:00 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4904003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:00 compute-1 ceph-mon[79167]: pgmap v383: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:59:00 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:00 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:00 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:00 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:00 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:00.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:01 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:01 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:01 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:01.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:01 compute-1 ceph-mon[79167]: pgmap v384: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:01 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:01 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49380046d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:02 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:02 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934001080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:02 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:02 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4904003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:02 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:59:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:59:02 compute-1 sudo[165613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 09:59:02 compute-1 sudo[165613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:59:02 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:02 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:02 compute-1 sudo[165613]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:02 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:02.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:03 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:03 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:59:03 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:03.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:59:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:03 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:03 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:59:03 compute-1 ceph-mon[79167]: pgmap v385: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:03 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 09:59:04 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:04 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49380046d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:04 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:04 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934003880 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:04 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:04 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:04 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:04.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:05 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:05 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:59:05 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:05.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:59:05 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:05 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4904003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:05 compute-1 ceph-mon[79167]: pgmap v386: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:06 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:06 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:06 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:06 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49380046d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:06 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:06 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:06 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:06.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:07 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:07 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:07 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:07.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:07 compute-1 ceph-mon[79167]: pgmap v387: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:07 compute-1 kernel: SELinux:  Converting 2769 SID table entries...
Oct 10 09:59:07 compute-1 kernel: SELinux:  policy capability network_peer_controls=1
Oct 10 09:59:07 compute-1 kernel: SELinux:  policy capability open_perms=1
Oct 10 09:59:07 compute-1 kernel: SELinux:  policy capability extended_socket_class=1
Oct 10 09:59:07 compute-1 kernel: SELinux:  policy capability always_check_network=0
Oct 10 09:59:07 compute-1 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 10 09:59:07 compute-1 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 10 09:59:07 compute-1 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 10 09:59:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:59:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:07 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934003880 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:07 compute-1 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Oct 10 09:59:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:08 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4904003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:08 compute-1 podman[165649]: 2025-10-10 09:59:08.088883449 +0000 UTC m=+0.176985278 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 09:59:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:08 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:08 compute-1 groupadd[165679]: group added to /etc/group: name=dnsmasq, GID=992
Oct 10 09:59:08 compute-1 groupadd[165679]: group added to /etc/gshadow: name=dnsmasq
Oct 10 09:59:08 compute-1 groupadd[165679]: new group: name=dnsmasq, GID=992
Oct 10 09:59:08 compute-1 useradd[165686]: new user: name=dnsmasq, UID=992, GID=992, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Oct 10 09:59:08 compute-1 dbus-broker-launch[759]: Noticed file-system modification, trigger reload.
Oct 10 09:59:08 compute-1 dbus-broker-launch[759]: Noticed file-system modification, trigger reload.
Oct 10 09:59:08 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:08 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:59:08 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:08.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:59:09 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:09 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:09 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:09.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:09 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49380046d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:09 compute-1 groupadd[165700]: group added to /etc/group: name=clevis, GID=991
Oct 10 09:59:09 compute-1 groupadd[165700]: group added to /etc/gshadow: name=clevis
Oct 10 09:59:09 compute-1 groupadd[165700]: new group: name=clevis, GID=991
Oct 10 09:59:09 compute-1 useradd[165707]: new user: name=clevis, UID=991, GID=991, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Oct 10 09:59:09 compute-1 usermod[165717]: add 'clevis' to group 'tss'
Oct 10 09:59:09 compute-1 usermod[165717]: add 'clevis' to shadow group 'tss'
Oct 10 09:59:09 compute-1 ceph-mon[79167]: pgmap v388: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:59:10 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:10 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934004590 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:10 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:10 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4904003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:10 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:10 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:59:10 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:10.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:59:11 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:11 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:11 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:11.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:11 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4904003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:11 compute-1 ceph-mon[79167]: pgmap v389: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:11 compute-1 polkitd[6374]: Reloading rules
Oct 10 09:59:11 compute-1 polkitd[6374]: Collecting garbage unconditionally...
Oct 10 09:59:11 compute-1 polkitd[6374]: Loading rules from directory /etc/polkit-1/rules.d
Oct 10 09:59:11 compute-1 polkitd[6374]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 10 09:59:11 compute-1 polkitd[6374]: Finished loading, compiling and executing 4 rules
Oct 10 09:59:11 compute-1 polkitd[6374]: Reloading rules
Oct 10 09:59:11 compute-1 polkitd[6374]: Collecting garbage unconditionally...
Oct 10 09:59:11 compute-1 polkitd[6374]: Loading rules from directory /etc/polkit-1/rules.d
Oct 10 09:59:11 compute-1 polkitd[6374]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 10 09:59:11 compute-1 polkitd[6374]: Finished loading, compiling and executing 4 rules
Oct 10 09:59:12 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:12 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49380046d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:12 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:12 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49380046d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:59:12 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:12 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:59:12 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:12.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:59:13 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:13 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:13 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:13.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:13 compute-1 groupadd[165905]: group added to /etc/group: name=ceph, GID=167
Oct 10 09:59:13 compute-1 groupadd[165905]: group added to /etc/gshadow: name=ceph
Oct 10 09:59:13 compute-1 groupadd[165905]: new group: name=ceph, GID=167
Oct 10 09:59:13 compute-1 useradd[165918]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Oct 10 09:59:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:13 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4904003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:13 compute-1 podman[165906]: 2025-10-10 09:59:13.408722643 +0000 UTC m=+0.116267242 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 10 09:59:13 compute-1 sudo[165938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:59:13 compute-1 sudo[165938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:59:13 compute-1 sudo[165938]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:14 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:14 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:14 compute-1 ceph-mon[79167]: pgmap v390: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:14 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:14 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:14 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:14.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:15 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:15 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:15 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:15.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:15 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:15 compute-1 ceph-mon[79167]: pgmap v391: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:16 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:16 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:16 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:16 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:59:16 compute-1 systemd[1]: Stopping OpenSSH server daemon...
Oct 10 09:59:16 compute-1 sshd[1006]: Received signal 15; terminating.
Oct 10 09:59:16 compute-1 systemd[1]: sshd.service: Deactivated successfully.
Oct 10 09:59:16 compute-1 systemd[1]: Stopped OpenSSH server daemon.
Oct 10 09:59:16 compute-1 systemd[1]: sshd.service: Consumed 4.543s CPU time, read 0B from disk, written 44.0K to disk.
Oct 10 09:59:16 compute-1 systemd[1]: Stopped target sshd-keygen.target.
Oct 10 09:59:16 compute-1 systemd[1]: Stopping sshd-keygen.target...
Oct 10 09:59:16 compute-1 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 10 09:59:16 compute-1 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 10 09:59:16 compute-1 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 10 09:59:16 compute-1 systemd[1]: Reached target sshd-keygen.target.
Oct 10 09:59:16 compute-1 systemd[1]: Starting OpenSSH server daemon...
Oct 10 09:59:16 compute-1 sshd[166604]: Server listening on 0.0.0.0 port 22.
Oct 10 09:59:16 compute-1 sshd[166604]: Server listening on :: port 22.
Oct 10 09:59:16 compute-1 systemd[1]: Started OpenSSH server daemon.
Oct 10 09:59:16 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:16 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:16 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:16.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:17 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:17 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:17 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:17.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:59:17 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:17 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c0014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:17 compute-1 ceph-mon[79167]: pgmap v392: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:18 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:18 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908003ed0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:18 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:18 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:18 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:18.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:19 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:19 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:19 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:19.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:19 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 10 09:59:19 compute-1 systemd[1]: Starting man-db-cache-update.service...
Oct 10 09:59:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:19 compute-1 systemd[1]: Reloading.
Oct 10 09:59:19 compute-1 systemd-rc-local-generator[166861]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:59:19 compute-1 systemd-sysv-generator[166866]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:59:19 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 10 09:59:19 compute-1 ceph-mon[79167]: pgmap v393: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:59:20 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:20 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c0014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:20 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:20 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:20 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:20 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:20 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:20.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:21 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:21 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:21 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:21.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:21 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:21 compute-1 systemd[1]: Starting PackageKit Daemon...
Oct 10 09:59:21 compute-1 PackageKit[168620]: daemon start
Oct 10 09:59:21 compute-1 systemd[1]: Started PackageKit Daemon.
Oct 10 09:59:21 compute-1 sudo[147660]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:22 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:22 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:22 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:22 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c0014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:22 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:59:22 compute-1 ceph-mon[79167]: pgmap v394: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:22 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:22 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:59:22 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:22.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:59:23 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:23 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:59:23 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:23.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:59:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:23 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908004070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:23 compute-1 ceph-mon[79167]: pgmap v395: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:24 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:24 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:24 compute-1 sudo[171458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnoffpopxlvfbojhxjyylnxyxwkweeep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090363.60175-969-233002627805244/AnsiballZ_systemd.py'
Oct 10 09:59:24 compute-1 sudo[171458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:24 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:24 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:24 compute-1 python3.9[171487]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 10 09:59:24 compute-1 systemd[1]: Reloading.
Oct 10 09:59:24 compute-1 systemd-rc-local-generator[171871]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:59:24 compute-1 systemd-sysv-generator[171878]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:59:24 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:24 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:24 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:24.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:24 compute-1 sudo[171458]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:25 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:25 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:59:25 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:25.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:59:25 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:25 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c0014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:25 compute-1 sudo[172671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtwtsjvghqdifngjexeaifdbdzccoran ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090365.0668356-969-97743294248711/AnsiballZ_systemd.py'
Oct 10 09:59:25 compute-1 sudo[172671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:25 compute-1 ceph-mon[79167]: pgmap v396: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:25 compute-1 python3.9[172703]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 10 09:59:25 compute-1 systemd[1]: Reloading.
Oct 10 09:59:25 compute-1 systemd-rc-local-generator[173140]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:59:25 compute-1 systemd-sysv-generator[173150]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:59:26 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:26 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908004070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:26 compute-1 sudo[172671]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:26 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:26 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:26 compute-1 sudo[173854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuoelzmuxavrdfbxbitrvvfcsburrclp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090366.3650641-969-248214637571162/AnsiballZ_systemd.py'
Oct 10 09:59:26 compute-1 sudo[173854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:26 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:26 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:26 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:26.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:26 compute-1 python3.9[173875]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 10 09:59:27 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:27 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:59:27 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:27.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:59:27 compute-1 systemd[1]: Reloading.
Oct 10 09:59:27 compute-1 systemd-sysv-generator[174332]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:59:27 compute-1 systemd-rc-local-generator[174326]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:59:27 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:59:27 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:27 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:27 compute-1 sudo[173854]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:27 compute-1 ceph-mon[79167]: pgmap v397: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:27 compute-1 sudo[175109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlmxwihgozboerfjulalbfhcbxvszcyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090367.5838387-969-241880156083861/AnsiballZ_systemd.py'
Oct 10 09:59:27 compute-1 sudo[175109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:28 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c0014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:28 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908004070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:28 compute-1 python3.9[175134]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 10 09:59:28 compute-1 systemd[1]: Reloading.
Oct 10 09:59:28 compute-1 systemd-sysv-generator[175445]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:59:28 compute-1 systemd-rc-local-generator[175441]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:59:28 compute-1 sudo[175109]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:28 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:28 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:59:28 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:28.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:59:28 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 10 09:59:28 compute-1 systemd[1]: Finished man-db-cache-update.service.
Oct 10 09:59:28 compute-1 systemd[1]: man-db-cache-update.service: Consumed 12.275s CPU time.
Oct 10 09:59:28 compute-1 systemd[1]: run-r0b44768244b348e2aa04d49e38e8b22d.service: Deactivated successfully.
Oct 10 09:59:29 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:29 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:29 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:29.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:29 compute-1 sudo[176024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czjprsxnhpcteenpgwtyadwozmbixdgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090368.8880286-1056-153588297419987/AnsiballZ_systemd.py'
Oct 10 09:59:29 compute-1 sudo[176024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:29 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003710 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:29 compute-1 python3.9[176026]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 09:59:29 compute-1 systemd[1]: Reloading.
Oct 10 09:59:29 compute-1 systemd-rc-local-generator[176058]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:59:29 compute-1 systemd-sysv-generator[176061]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:59:29 compute-1 ceph-mon[79167]: pgmap v398: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:59:29 compute-1 sudo[176024]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:30 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:30 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:30 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:30 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c0014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:30 compute-1 sudo[176215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvvyvnyoxyeqrpqequxfphwuxducjyyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090370.0360599-1056-38168501694994/AnsiballZ_systemd.py'
Oct 10 09:59:30 compute-1 sudo[176215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:30 compute-1 python3.9[176217]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 09:59:30 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:30 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:30 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:30.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:31 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:31 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:31 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:31.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:31 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908004070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:31 compute-1 ceph-mon[79167]: pgmap v399: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:31 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:59:31 compute-1 systemd[1]: Reloading.
Oct 10 09:59:31 compute-1 systemd-rc-local-generator[176247]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:59:31 compute-1 systemd-sysv-generator[176251]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:59:32 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:32 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c003710 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:32 compute-1 sudo[176215]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:32 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:32 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:32 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:59:32 compute-1 sudo[176405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rscukniqsckdrozkzeksyzxdbawecqrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090372.3121965-1056-254992435588148/AnsiballZ_systemd.py'
Oct 10 09:59:32 compute-1 sudo[176405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:32 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:32 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:59:32 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:32.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:59:32 compute-1 python3.9[176407]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 09:59:33 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:33 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:33 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:33.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:33 compute-1 systemd[1]: Reloading.
Oct 10 09:59:33 compute-1 systemd-rc-local-generator[176437]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:59:33 compute-1 systemd-sysv-generator[176441]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:59:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:33 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c0014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:33 compute-1 sudo[176405]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:33 compute-1 sudo[176498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:59:33 compute-1 sudo[176498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:59:33 compute-1 sudo[176498]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:33 compute-1 ceph-mon[79167]: pgmap v400: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:33 compute-1 sudo[176621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oynuqoyrujifxdcyutmlojnsrdsaaofi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090373.6137023-1056-126222086623412/AnsiballZ_systemd.py'
Oct 10 09:59:33 compute-1 sudo[176621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:34 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:34 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908004070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:34 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:34 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c004420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:34 compute-1 python3.9[176623]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 09:59:34 compute-1 sudo[176621]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:34 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:34 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:59:34 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:34.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:59:34 compute-1 sudo[176776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctcjhmvguupoeasvdlyznxnzsiwomxvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090374.5943766-1056-136455226061586/AnsiballZ_systemd.py'
Oct 10 09:59:34 compute-1 sudo[176776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:35 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:35 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:35 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:35.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:35 compute-1 python3.9[176778]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 09:59:35 compute-1 systemd[1]: Reloading.
Oct 10 09:59:35 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:35 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:35 compute-1 systemd-rc-local-generator[176805]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:59:35 compute-1 systemd-sysv-generator[176810]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:59:35 compute-1 sudo[176776]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:35 compute-1 ceph-mon[79167]: pgmap v401: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:36 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:36 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:36 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:36 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908004070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:36 compute-1 sudo[176967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdqysoouxxjuvhrqqxixdhpvogfnuhqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090375.9619436-1164-116638738303825/AnsiballZ_systemd.py'
Oct 10 09:59:36 compute-1 sudo[176967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:36 compute-1 python3.9[176969]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 10 09:59:36 compute-1 systemd[1]: Reloading.
Oct 10 09:59:36 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:36 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:36 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:36.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:36 compute-1 systemd-sysv-generator[177000]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 09:59:36 compute-1 systemd-rc-local-generator[176996]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 09:59:37 compute-1 systemd[1]: Listening on libvirt proxy daemon socket.
Oct 10 09:59:37 compute-1 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Oct 10 09:59:37 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:37 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:37 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:37.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:37 compute-1 sudo[176967]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:59:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:37 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c004420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:37 compute-1 ceph-mon[79167]: pgmap v402: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:38 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c004420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:38 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:38 compute-1 sudo[177172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oieubavwmgxaulferggvucecztleplpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090378.3809009-1188-70011820273837/AnsiballZ_systemd.py'
Oct 10 09:59:38 compute-1 sudo[177172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:38 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:38 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:38 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:38.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:38 compute-1 podman[177134]: 2025-10-10 09:59:38.831727531 +0000 UTC m=+0.155335152 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 10 09:59:39 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:39 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:39 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:39.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:39 compute-1 python3.9[177180]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 09:59:39 compute-1 sudo[177172]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:39 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908004070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:39 compute-1 sudo[177343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsszjtggtlcfdtbsphfxaidckjrfhjui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090379.357163-1188-181952026519167/AnsiballZ_systemd.py'
Oct 10 09:59:39 compute-1 sudo[177343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:39 compute-1 ceph-mon[79167]: pgmap v403: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:59:40 compute-1 python3.9[177345]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 09:59:40 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:40 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c004420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:40 compute-1 sudo[177343]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:40 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:40 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c0014d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:40 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:40 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:40 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:40.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:40 compute-1 sudo[177498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbgvidykpsdfrbtjggicdesmlksimdlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090380.5259373-1188-158451294560813/AnsiballZ_systemd.py'
Oct 10 09:59:40 compute-1 sudo[177498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:41 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:41 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:41 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:41.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:41 compute-1 python3.9[177500]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 09:59:41 compute-1 sudo[177498]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:41 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:41 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:41 compute-1 sudo[177654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwnysbyvgpdlllzzkxrbmwoolxtijzxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090381.4186404-1188-189355523767711/AnsiballZ_systemd.py'
Oct 10 09:59:41 compute-1 sudo[177654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:41 compute-1 ceph-mon[79167]: pgmap v404: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:42 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:42 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908004070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:42 compute-1 python3.9[177656]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 09:59:42 compute-1 sudo[177654]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:59:42.192 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 09:59:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:59:42.193 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 09:59:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 09:59:42.193 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 09:59:42 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:42 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c004420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:42 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:59:42 compute-1 sudo[177809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zaygkmwnohwzltiswscienmsaljtxrfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090382.3391654-1188-223505726503636/AnsiballZ_systemd.py'
Oct 10 09:59:42 compute-1 sudo[177809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:42 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:42 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:42 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:42.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:42 compute-1 python3.9[177811]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 09:59:43 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:43 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:43 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:43.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:43 compute-1 sudo[177809]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:43 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c0044a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:43 compute-1 podman[177939]: 2025-10-10 09:59:43.640689038 +0000 UTC m=+0.054489037 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 10 09:59:43 compute-1 sudo[177984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfolsluwtvhrdwkxsdfgreoghgaxaucf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090383.2714086-1188-10423879591250/AnsiballZ_systemd.py'
Oct 10 09:59:43 compute-1 sudo[177984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:43 compute-1 ceph-mon[79167]: pgmap v405: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:43 compute-1 python3.9[177986]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 09:59:44 compute-1 sudo[177984]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:44 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:44 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908004070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:44 compute-1 sudo[178139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltzodbrogonqzexcnmywckwbalkxxhbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090384.232544-1188-122310636234704/AnsiballZ_systemd.py'
Oct 10 09:59:44 compute-1 sudo[178139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:44 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:44 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:44 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:44.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:44 compute-1 python3.9[178141]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 09:59:45 compute-1 sudo[178139]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:45 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:45 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:59:45 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:45.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:59:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:45 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c004420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:45 compute-1 sudo[178295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rshogomgvqrgjtzbvrsihtjwvxxhyzqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090385.1899638-1188-225496148817738/AnsiballZ_systemd.py'
Oct 10 09:59:45 compute-1 sudo[178295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:45 compute-1 ceph-mon[79167]: pgmap v406: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:45 compute-1 python3.9[178297]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 09:59:46 compute-1 sudo[178295]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:46 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:46 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c0044a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:46 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:46 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_21] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:46 compute-1 sudo[178450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxashqgqgvnqvbukxztybdbwqkwbdoup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090386.2093863-1188-257636290978942/AnsiballZ_systemd.py'
Oct 10 09:59:46 compute-1 sudo[178450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:46 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:46 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:46 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:46.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:46 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 09:59:46 compute-1 python3.9[178452]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 09:59:47 compute-1 sudo[178450]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:47 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:47 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:59:47 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:47.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:59:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:59:47 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:47 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908004070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:47 compute-1 sudo[178608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqlzjzpimzypusfmklodxgkuozsxoksw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090387.2139049-1188-245840101182704/AnsiballZ_systemd.py'
Oct 10 09:59:47 compute-1 sudo[178608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:47 compute-1 python3.9[178610]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 09:59:47 compute-1 ceph-mon[79167]: pgmap v407: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:48 compute-1 sudo[178608]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:48 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c004420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:48 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934003490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:48 compute-1 sudo[178763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czlukolbzlmjrkltsxpnmcrhasfnuhwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090388.175931-1188-202226528703899/AnsiballZ_systemd.py'
Oct 10 09:59:48 compute-1 sudo[178763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:48 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:48 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:48 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:48.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:48 compute-1 python3.9[178765]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 09:59:48 compute-1 sudo[178763]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:49 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:49 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 09:59:49 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:49.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 09:59:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:49 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:49 compute-1 sudo[178919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yarqrhvlewxyzqaijqxoivqzaigxfvtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090389.2687664-1188-133349647223210/AnsiballZ_systemd.py'
Oct 10 09:59:49 compute-1 sudo[178919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:49 compute-1 ceph-mon[79167]: pgmap v408: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 09:59:49 compute-1 python3.9[178921]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 09:59:50 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:50 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908004070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:50 compute-1 sudo[178919]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:50 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:50 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c004420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:50 compute-1 sudo[179074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvqvpsntcezuqauizapsegljdoykovon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090390.27653-1188-261998557930955/AnsiballZ_systemd.py'
Oct 10 09:59:50 compute-1 sudo[179074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:50 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:50 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:50 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:50.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:50 compute-1 python3.9[179076]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 09:59:51 compute-1 sudo[179074]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:51 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:51 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 09:59:51 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:51.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 09:59:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:51 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934003490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:51 compute-1 sudo[179230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdfsktlzytkiewcsvxntnnznovclaemh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090391.2381432-1188-86638652991912/AnsiballZ_systemd.py'
Oct 10 09:59:51 compute-1 sudo[179230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:51 compute-1 python3.9[179232]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 10 09:59:51 compute-1 ceph-mon[79167]: pgmap v409: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:52 compute-1 sudo[179230]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:52 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:52 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:52 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:52 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908004070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:59:52 compute-1 sudo[179385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwwxvrgvnxdzgvjhmzebjttmrsyorksn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090392.487272-1494-137703387050331/AnsiballZ_file.py'
Oct 10 09:59:52 compute-1 sudo[179385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:52 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:52 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:52 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:52.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:52 compute-1 python3.9[179387]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:59:52 compute-1 sudo[179385]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:53 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:53 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:53 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:53.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:53 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c004420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:53 compute-1 sudo[179538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcgiyozofgbibaewfouvccnmepwfomnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090393.1607957-1494-153655193439229/AnsiballZ_file.py'
Oct 10 09:59:53 compute-1 sudo[179538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:53 compute-1 python3.9[179540]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:59:53 compute-1 sudo[179538]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:53 compute-1 sudo[179562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 09:59:53 compute-1 sudo[179562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 09:59:53 compute-1 sudo[179562]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:54 compute-1 ceph-mon[79167]: pgmap v410: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:54 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:54 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934003490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:54 compute-1 sudo[179715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kooyrevfjqqdgdixabrvvaptkykwlibq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090393.811123-1494-173194974821590/AnsiballZ_file.py'
Oct 10 09:59:54 compute-1 sudo[179715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:54 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:54 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:54 compute-1 python3.9[179717]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:59:54 compute-1 sudo[179715]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:54 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:54 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:54 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:54.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:54 compute-1 sudo[179867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnajzutknmrpnypgxjqokjanivywwplx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090394.5050404-1494-10604107723620/AnsiballZ_file.py'
Oct 10 09:59:54 compute-1 sudo[179867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:55 compute-1 python3.9[179869]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:59:55 compute-1 sudo[179867]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:55 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:55 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:55 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:55.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:55 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908004070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:55 compute-1 sudo[180020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glppiwiatrgnlleqyptgszfatneyewql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090395.209981-1494-179928403670402/AnsiballZ_file.py'
Oct 10 09:59:55 compute-1 sudo[180020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:55 compute-1 python3.9[180022]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:59:55 compute-1 sudo[180020]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:56 compute-1 ceph-mon[79167]: pgmap v411: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:56 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:56 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c004420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:56 compute-1 sudo[180172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqalxoqracjkngdqdsxujglrucsakzbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090395.9611008-1494-204235894312889/AnsiballZ_file.py'
Oct 10 09:59:56 compute-1 sudo[180172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:56 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:56 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:56 compute-1 python3.9[180174]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 10 09:59:56 compute-1 sudo[180172]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:56 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:56 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:56 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:56.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:57 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:57 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 09:59:57 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:57.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 09:59:57 compute-1 sudo[180324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xadxhyaeruplviustbcdgqvsjjpfpsnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090396.8007727-1623-238081130552531/AnsiballZ_stat.py'
Oct 10 09:59:57 compute-1 sudo[180324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 09:59:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:57 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:57 compute-1 python3.9[180326]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:59:57 compute-1 sudo[180324]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:58 compute-1 ceph-mon[79167]: pgmap v412: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 09:59:58 compute-1 sudo[180450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvhujijlspxoawndhodoackltlpbaqkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090396.8007727-1623-238081130552531/AnsiballZ_copy.py'
Oct 10 09:59:58 compute-1 sudo[180450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:58 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908004070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:58 compute-1 python3.9[180452]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760090396.8007727-1623-238081130552531/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:59:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:58 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c004420 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:58 compute-1 sudo[180450]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:58 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:58 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 09:59:58 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:09:59:58.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 09:59:58 compute-1 sudo[180602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rexuavlcbxuxupoodcpsuksgmbtijygs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090398.5038142-1623-154772745150641/AnsiballZ_stat.py'
Oct 10 09:59:58 compute-1 sudo[180602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:59 compute-1 python3.9[180604]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 09:59:59 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 09:59:59 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 09:59:59 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:09:59:59.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 09:59:59 compute-1 sudo[180602]: pam_unix(sudo:session): session closed for user root
Oct 10 09:59:59 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 09:59:59 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 09:59:59 compute-1 sudo[180728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhicgrwkzqstznussmsvoirfgekthqxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090398.5038142-1623-154772745150641/AnsiballZ_copy.py'
Oct 10 09:59:59 compute-1 sudo[180728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 09:59:59 compute-1 python3.9[180730]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760090398.5038142-1623-154772745150641/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 09:59:59 compute-1 sudo[180728]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:00 compute-1 ceph-mon[79167]: pgmap v413: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 10:00:00 compute-1 ceph-mon[79167]: overall HEALTH_OK
Oct 10 10:00:00 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:00 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:00 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:00 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908004070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:00 compute-1 sudo[180880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-keqipcbetfhjasqichvzkuzwlgstbiww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090399.957148-1623-23640111150468/AnsiballZ_stat.py'
Oct 10 10:00:00 compute-1 sudo[180880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:00 compute-1 python3.9[180882]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:00:00 compute-1 sudo[180880]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:00 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:00 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:00 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:00.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:00 compute-1 sudo[181005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvmvdawdhnschejfcjqrbmsvbkgqmaei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090399.957148-1623-23640111150468/AnsiballZ_copy.py'
Oct 10 10:00:00 compute-1 sudo[181005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:01 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:01 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:00:01 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:01.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:00:01 compute-1 ceph-mon[79167]: pgmap v414: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:00:01 compute-1 python3.9[181007]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760090399.957148-1623-23640111150468/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:01 compute-1 sudo[181005]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:01 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:01 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c00c0e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:01 compute-1 sudo[181158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbtrpluwmjjtcsirqfgyyhymbheuvdyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090401.3041759-1623-236170911593740/AnsiballZ_stat.py'
Oct 10 10:00:01 compute-1 sudo[181158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:01 compute-1 python3.9[181160]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:00:01 compute-1 sudo[181158]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:02 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:02 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934004bb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:02 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:00:02 compute-1 sudo[181283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsevwpgxkmkmtofedsqrjxlzzfpwdxnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090401.3041759-1623-236170911593740/AnsiballZ_copy.py'
Oct 10 10:00:02 compute-1 sudo[181283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:02 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:02 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:00:02 compute-1 python3.9[181285]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760090401.3041759-1623-236170911593740/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:02 compute-1 sudo[181283]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:02 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:02 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:02 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:02.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:02 compute-1 sudo[181388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:00:02 compute-1 sudo[181388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:00:02 compute-1 sudo[181388]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:03 compute-1 sudo[181434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:00:03 compute-1 sudo[181434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:00:03 compute-1 sudo[181484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlftqfbvnztmaxaelqczndekdmiuzovp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090402.6463683-1623-227007920265934/AnsiballZ_stat.py'
Oct 10 10:00:03 compute-1 sudo[181484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:03 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:03 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:03 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:03.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:03 compute-1 ceph-mon[79167]: pgmap v415: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:00:03 compute-1 python3.9[181487]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:00:03 compute-1 sudo[181484]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:03 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:03 compute-1 sudo[181434]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:03 compute-1 sudo[181642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phlcrdgukvdngqadbtafvjuttrtoqsmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090402.6463683-1623-227007920265934/AnsiballZ_copy.py'
Oct 10 10:00:03 compute-1 sudo[181642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:03 compute-1 python3.9[181644]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760090402.6463683-1623-227007920265934/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:03 compute-1 sudo[181642]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:04 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:04 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908004070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:04 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:04 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934004bb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:04 compute-1 sudo[181794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyseurqubbepackpqahycewxddbzrtjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090404.0837252-1623-27713314352602/AnsiballZ_stat.py'
Oct 10 10:00:04 compute-1 sudo[181794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:04 compute-1 python3.9[181796]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:00:04 compute-1 sudo[181794]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:04 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:04 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:04 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:04.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:05 compute-1 sudo[181919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-peiacdarkjkaosqvihhegiaajzgblcer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090404.0837252-1623-27713314352602/AnsiballZ_copy.py'
Oct 10 10:00:05 compute-1 sudo[181919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:05 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:05 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:00:05 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:05.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:00:05 compute-1 python3.9[181921]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760090404.0837252-1623-27713314352602/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:05 compute-1 sudo[181919]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:05 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:05 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c00c0e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:05 compute-1 ceph-mon[79167]: pgmap v416: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:00:05 compute-1 sudo[182072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksltqnivxuvksvfvrqjvwuoadwymhcff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090405.4512913-1623-216920132632203/AnsiballZ_stat.py'
Oct 10 10:00:05 compute-1 sudo[182072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:05 compute-1 python3.9[182074]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:00:05 compute-1 sudo[182072]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:06 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:06 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:06 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:06 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908004070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:06 compute-1 sudo[182195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhrmbhutulctndmqcftperasmwtkmijv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090405.4512913-1623-216920132632203/AnsiballZ_copy.py'
Oct 10 10:00:06 compute-1 sudo[182195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:06 compute-1 python3.9[182197]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760090405.4512913-1623-216920132632203/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:06 compute-1 sudo[182195]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:06 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:06 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:06 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:06.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:07 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:07 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:00:07 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:07.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:00:07 compute-1 sudo[182347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zplpgujbvozkqpxcaxqgfwwghmggcpmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090406.7927184-1623-235624757740266/AnsiballZ_stat.py'
Oct 10 10:00:07 compute-1 sudo[182347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:00:07 compute-1 python3.9[182349]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:00:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:07 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908004070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:07 compute-1 sudo[182347]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:07 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:00:07 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:00:07 compute-1 ceph-mon[79167]: pgmap v417: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:00:07 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:00:07 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:00:07 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:00:07 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:00:07 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:00:07 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:00:07 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:00:07 compute-1 sudo[182473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptmfshmvdwsnlodrlvqkeiaudwfyvssk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090406.7927184-1623-235624757740266/AnsiballZ_copy.py'
Oct 10 10:00:07 compute-1 sudo[182473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:08 compute-1 python3.9[182475]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760090406.7927184-1623-235624757740266/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:08 compute-1 sudo[182473]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:08 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c00c0e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:08 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:08 compute-1 sudo[182625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyjqudltvbrullckyjusvnpylgisxkbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090408.258956-1962-150913108739571/AnsiballZ_command.py'
Oct 10 10:00:08 compute-1 sudo[182625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:08 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:08 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:08 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:08.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:08 compute-1 python3.9[182627]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Oct 10 10:00:08 compute-1 sudo[182625]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:09 compute-1 podman[182629]: 2025-10-10 10:00:09.01960784 +0000 UTC m=+0.119487396 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 10 10:00:09 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:09 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:00:09 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:09.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:00:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:09 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908004070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:09 compute-1 sudo[182803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czifymqmravjwldmokzbgwdkngmbohci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090409.1796508-1989-113627894882843/AnsiballZ_file.py'
Oct 10 10:00:09 compute-1 sudo[182803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:09 compute-1 ceph-mon[79167]: pgmap v418: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 10:00:09 compute-1 python3.9[182805]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:09 compute-1 sudo[182803]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:10 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:10 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934004bb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:10 compute-1 sudo[182955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfqqfbkdkxdgswmjlokavynmxlzgrbql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090409.9111447-1989-195300562519922/AnsiballZ_file.py'
Oct 10 10:00:10 compute-1 sudo[182955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:10 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:10 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c00c0e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:10 compute-1 python3.9[182957]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:10 compute-1 sudo[182955]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:10 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:10 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:10 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:10.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:10 compute-1 sudo[183107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaijjudhmoxddtvvuqocaqoeqcpyrbmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090410.6335483-1989-152034025343140/AnsiballZ_file.py'
Oct 10 10:00:10 compute-1 sudo[183107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:11 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:11 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:00:11 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:11.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:00:11 compute-1 python3.9[183109]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:11 compute-1 sudo[183107]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:11 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:11 compute-1 sudo[183260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nljbokdsrhpuzutqxwqdngiwferkhuoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090411.3142-1989-35757983105440/AnsiballZ_file.py'
Oct 10 10:00:11 compute-1 ceph-mon[79167]: pgmap v419: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:00:11 compute-1 sudo[183260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:11 compute-1 python3.9[183262]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:11 compute-1 sudo[183260]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:12 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:12 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908004070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:00:12 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:12 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934004bb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:12 compute-1 sudo[183412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evqgprvrbsvdhbykelvemioacxpysyih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090412.0060692-1989-46833942691929/AnsiballZ_file.py'
Oct 10 10:00:12 compute-1 sudo[183412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:12 compute-1 python3.9[183414]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:12 compute-1 sudo[183412]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:12 compute-1 sudo[183415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:00:12 compute-1 sudo[183415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:00:12 compute-1 sudo[183415]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:12 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:12 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:12 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:12.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:13 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:13 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:13 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:13.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:13 compute-1 sudo[183589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlssoqzcjgjrbswrrkxjpnmxzlhiplei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090412.7726378-1989-70684228340650/AnsiballZ_file.py'
Oct 10 10:00:13 compute-1 sudo[183589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:13 compute-1 python3.9[183591]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:13 compute-1 sudo[183589]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:13 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f492c00c0e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:13 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:00:13 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:00:13 compute-1 ceph-mon[79167]: pgmap v420: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:00:13 compute-1 sudo[183757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwbwmxxtrafmtjtqigvhumnevorwonob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090413.503401-1989-96662741387043/AnsiballZ_file.py'
Oct 10 10:00:13 compute-1 sudo[183757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:13 compute-1 podman[183716]: 2025-10-10 10:00:13.849282629 +0000 UTC m=+0.085020252 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, 
io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 10:00:13 compute-1 sudo[183754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:00:13 compute-1 sudo[183754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:00:13 compute-1 sudo[183754]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:14 compute-1 python3.9[183780]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:14 compute-1 sudo[183757]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:14 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:14 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908004070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:14 compute-1 sudo[183937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irkekwdstqsnsfuntwesbjcxypexxkwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090414.217571-1989-254061959222459/AnsiballZ_file.py'
Oct 10 10:00:14 compute-1 sudo[183937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:14 compute-1 python3.9[183939]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:14 compute-1 sudo[183937]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:14 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:14 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:14 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:14.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:15 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:15 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:15 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:15.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:15 compute-1 sudo[184089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngnmfnyywxjbjfiufzncvzwbiogjdtli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090414.876942-1989-279162320176232/AnsiballZ_file.py'
Oct 10 10:00:15 compute-1 sudo[184089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:15 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934004bb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:15 compute-1 python3.9[184091]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:15 compute-1 sudo[184089]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:15 compute-1 ceph-mon[79167]: pgmap v421: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:00:16 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:16 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934004bb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:16 compute-1 sudo[184242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyazqvclchejdmfbugcizkfpkbfcxogd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090415.6260772-1989-12872932364765/AnsiballZ_file.py'
Oct 10 10:00:16 compute-1 sudo[184242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:16 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:16 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:16 compute-1 python3.9[184244]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:16 compute-1 sudo[184242]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:00:16 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:16 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:00:16 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:16.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:00:17 compute-1 sudo[184394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbctbyhyftokhzghpljvgxezxszvkbsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090416.6711683-1989-244221455288624/AnsiballZ_file.py'
Oct 10 10:00:17 compute-1 sudo[184394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:17 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:17 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:00:17 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:17.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:00:17 compute-1 python3.9[184396]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:17 compute-1 sudo[184394]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:00:17 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:17 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4908004090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:17 compute-1 ceph-mon[79167]: pgmap v422: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:00:17 compute-1 sudo[184548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvydvjzwpsivmtwjsydeprjvdxelxebj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090417.4291012-1989-106933458007785/AnsiballZ_file.py'
Oct 10 10:00:17 compute-1 sudo[184548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:17 compute-1 python3.9[184550]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:18 compute-1 sudo[184548]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:18 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4934004bb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:18 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c001530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:18 compute-1 sudo[184700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-macvqpyqwpvtjuebqemgypirlqbwbolz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090418.2071943-1989-99557336569530/AnsiballZ_file.py'
Oct 10 10:00:18 compute-1 sudo[184700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:18 compute-1 python3.9[184702]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:18 compute-1 sudo[184700]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:18 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:18 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:18 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:18.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:19 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:19 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:00:19 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:19.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:00:19 compute-1 sudo[184853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhctncqmfgjujdmbgzjbstskqlgwdwzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090418.9953516-1989-21155454254668/AnsiballZ_file.py'
Oct 10 10:00:19 compute-1 sudo[184853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:19 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:19 compute-1 python3.9[184855]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:19 compute-1 sudo[184853]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:19 compute-1 ceph-mon[79167]: pgmap v423: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 10:00:20 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:20 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49080040b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:20 compute-1 sudo[185006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztisxabmwiqmblesytjrkubaudjgvkal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090419.8174133-2286-270545624366074/AnsiballZ_stat.py'
Oct 10 10:00:20 compute-1 sudo[185006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:20 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:20 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4904002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:20 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100020 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 10:00:20 compute-1 python3.9[185008]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:00:20 compute-1 sudo[185006]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:20 compute-1 sudo[185129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxnwlqeivekqspcaxmfqptdqmhmavzfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090419.8174133-2286-270545624366074/AnsiballZ_copy.py'
Oct 10 10:00:20 compute-1 sudo[185129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:20 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:20 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:00:20 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:20.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:00:21 compute-1 python3.9[185131]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090419.8174133-2286-270545624366074/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:21 compute-1 sudo[185129]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:21 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:21 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:00:21 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:21.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:00:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:21 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c001530 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:21 compute-1 sudo[185282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwnmtvnfeumnyxrkuxjzvbktudimkrvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090421.228271-2286-79502027795312/AnsiballZ_stat.py'
Oct 10 10:00:21 compute-1 sudo[185282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:21 compute-1 ceph-mon[79167]: pgmap v424: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:00:21 compute-1 python3.9[185284]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:00:21 compute-1 sudo[185282]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:22 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:22 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:22 compute-1 sudo[185405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aioqhyrpbimsgwsinjguwrskcbuhlsjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090421.228271-2286-79502027795312/AnsiballZ_copy.py'
Oct 10 10:00:22 compute-1 sudo[185405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:22 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:00:22 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:22 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49080040d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:22 compute-1 python3.9[185407]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090421.228271-2286-79502027795312/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:22 compute-1 sudo[185405]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:22 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:22 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:22 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:22.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:22 compute-1 sudo[185557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lusotqqamxjpnfuvsealfozfupjxlqzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090422.678692-2286-265801747247513/AnsiballZ_stat.py'
Oct 10 10:00:22 compute-1 sudo[185557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:23 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:23 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:23 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:23.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:23 compute-1 python3.9[185559]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:00:23 compute-1 sudo[185557]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:23 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4904002690 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:23 compute-1 sudo[185681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccwnpqedifoqkcrcslewbyvargpdvazg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090422.678692-2286-265801747247513/AnsiballZ_copy.py'
Oct 10 10:00:23 compute-1 sudo[185681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:23 compute-1 ceph-mon[79167]: pgmap v425: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:00:23 compute-1 python3.9[185683]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090422.678692-2286-265801747247513/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:23 compute-1 sudo[185681]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:24 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:24 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f491c003280 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:24 compute-1 sudo[185833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbuxdgzpxayuxmjjpginqtzgmvyobvpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090424.0010319-2286-40562018178758/AnsiballZ_stat.py'
Oct 10 10:00:24 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:24 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4910004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:24 compute-1 sudo[185833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:24 compute-1 python3.9[185835]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:00:24 compute-1 sudo[185833]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:24 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:24 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:00:24 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:24.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:00:24 compute-1 sudo[185956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxntzrwzwnhrofgjgoxhcckxwjuqzjca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090424.0010319-2286-40562018178758/AnsiballZ_copy.py'
Oct 10 10:00:24 compute-1 sudo[185956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:25 compute-1 python3.9[185958]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090424.0010319-2286-40562018178758/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:25 compute-1 sudo[185956]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:25 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:25 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:25 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:25.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:25 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[137178]: 10/10/2025 10:00:25 : epoch 68e8d837 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f49080040f0 fd 48 proxy ignored for local
Oct 10 10:00:25 compute-1 kernel: ganesha.nfsd[148040]: segfault at 50 ip 00007f49e966232e sp 00007f49b8ff8210 error 4 in libntirpc.so.5.8[7f49e9647000+2c000] likely on CPU 3 (core 0, socket 3)
Oct 10 10:00:25 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 10 10:00:25 compute-1 systemd[1]: Started Process Core Dump (PID 186036/UID 0).
Oct 10 10:00:25 compute-1 sudo[186111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alxrmdgrkgmhxqjguwhftatrzgalpmju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090425.2710648-2286-93488743776918/AnsiballZ_stat.py'
Oct 10 10:00:25 compute-1 sudo[186111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:25 compute-1 ceph-mon[79167]: pgmap v426: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:00:25 compute-1 python3.9[186113]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:00:25 compute-1 sudo[186111]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:26 compute-1 sudo[186234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsazemihsndqeonbcecyijyeksiwifnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090425.2710648-2286-93488743776918/AnsiballZ_copy.py'
Oct 10 10:00:26 compute-1 sudo[186234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:26 compute-1 python3.9[186236]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090425.2710648-2286-93488743776918/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:26 compute-1 sudo[186234]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:26 compute-1 systemd-coredump[186045]: Process 137201 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 59:
                                                    #0  0x00007f49e966232e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Oct 10 10:00:26 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:26 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:00:26 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:26.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:00:26 compute-1 systemd[1]: systemd-coredump@3-186036-0.service: Deactivated successfully.
Oct 10 10:00:26 compute-1 systemd[1]: systemd-coredump@3-186036-0.service: Consumed 1.369s CPU time.
Oct 10 10:00:26 compute-1 sudo[186391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sudfobfybyeyiiuupkgsmuvybyoehgvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090426.579293-2286-156730524990194/AnsiballZ_stat.py'
Oct 10 10:00:26 compute-1 sudo[186391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:26 compute-1 podman[186390]: 2025-10-10 10:00:26.918634661 +0000 UTC m=+0.047979698 container died f06251c00be534001d35bdb537d404f9774100b5ad0c3caa27f9fd4f4b4dedb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:00:26 compute-1 systemd[1]: var-lib-containers-storage-overlay-e3e619c70174e25189507790d74cd6c583ce379b86dd3dfded0cd49fbdbca08e-merged.mount: Deactivated successfully.
Oct 10 10:00:26 compute-1 podman[186390]: 2025-10-10 10:00:26.961742808 +0000 UTC m=+0.091087845 container remove f06251c00be534001d35bdb537d404f9774100b5ad0c3caa27f9fd4f4b4dedb1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default)
Oct 10 10:00:26 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Main process exited, code=exited, status=139/n/a
Oct 10 10:00:27 compute-1 python3.9[186399]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:00:27 compute-1 sudo[186391]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:27 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:27 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:27 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:27.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:27 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Failed with result 'exit-code'.
Oct 10 10:00:27 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 2.348s CPU time.
Oct 10 10:00:27 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:00:27 compute-1 sudo[186559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qirjbtljnhkdiqkjhlfmjtltrujvyhyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090426.579293-2286-156730524990194/AnsiballZ_copy.py'
Oct 10 10:00:27 compute-1 sudo[186559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:27 compute-1 python3.9[186561]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090426.579293-2286-156730524990194/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:27 compute-1 sudo[186559]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:27 compute-1 ceph-mon[79167]: pgmap v427: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:00:28 compute-1 sudo[186711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxsbbjndygpjnbmgoitqejzmwxqdoakc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090427.8587306-2286-233201256705247/AnsiballZ_stat.py'
Oct 10 10:00:28 compute-1 sudo[186711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:28 compute-1 python3.9[186713]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:00:28 compute-1 sudo[186711]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:28 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:28 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:00:28 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:28.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:00:28 compute-1 sudo[186834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muljvjxjzjbpgwpnchufiwidulyqmklk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090427.8587306-2286-233201256705247/AnsiballZ_copy.py'
Oct 10 10:00:28 compute-1 sudo[186834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:29 compute-1 python3.9[186836]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090427.8587306-2286-233201256705247/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:29 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:29 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:29 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:29.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:29 compute-1 sudo[186834]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:29 compute-1 sudo[186987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxpiebcozvstkacaftuvpinqffzidniw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090429.360766-2286-78106258280269/AnsiballZ_stat.py'
Oct 10 10:00:29 compute-1 sudo[186987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:29 compute-1 ceph-mon[79167]: pgmap v428: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:00:29 compute-1 python3.9[186989]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:00:29 compute-1 sudo[186987]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:30 compute-1 sudo[187110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kptbzntkylpcgtanalemjlnpefajucgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090429.360766-2286-78106258280269/AnsiballZ_copy.py'
Oct 10 10:00:30 compute-1 sudo[187110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:30 compute-1 python3.9[187112]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090429.360766-2286-78106258280269/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:30 compute-1 sudo[187110]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:30 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:30 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:30 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:30.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:31 compute-1 sudo[187262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpuyewfepvvuzoitxpwxwsqarxmkopxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090430.7404933-2286-51877306949992/AnsiballZ_stat.py'
Oct 10 10:00:31 compute-1 sudo[187262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:31 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:31 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:31 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:31.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:31 compute-1 python3.9[187264]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:00:31 compute-1 sudo[187262]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100031 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 10:00:31 compute-1 sudo[187386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqsmkjugtnacngdexqisyzcdcpqcsjmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090430.7404933-2286-51877306949992/AnsiballZ_copy.py'
Oct 10 10:00:31 compute-1 sudo[187386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:31 compute-1 ceph-mon[79167]: pgmap v429: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:00:31 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:00:31 compute-1 python3.9[187388]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090430.7404933-2286-51877306949992/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:31 compute-1 sudo[187386]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:32 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:00:32 compute-1 sudo[187538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyyrwklsffruvfcixyfznimltwcrcjlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090432.1004825-2286-147900639177621/AnsiballZ_stat.py'
Oct 10 10:00:32 compute-1 sudo[187538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:32 compute-1 python3.9[187540]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:00:32 compute-1 sudo[187538]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:32 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:32 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:00:32 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:32.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:00:33 compute-1 sudo[187661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uipfsuxwrwjfxkqanvnfdoucskpxarwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090432.1004825-2286-147900639177621/AnsiballZ_copy.py'
Oct 10 10:00:33 compute-1 sudo[187661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:33 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:33 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:33 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:33.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:33 compute-1 python3.9[187663]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090432.1004825-2286-147900639177621/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:33 compute-1 sudo[187661]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:33 compute-1 ceph-mon[79167]: pgmap v430: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Oct 10 10:00:33 compute-1 sudo[187814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfpfgggreybhuqndsoikdmkuzqcwrgea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090433.4790075-2286-182413493181529/AnsiballZ_stat.py'
Oct 10 10:00:33 compute-1 sudo[187814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:33 compute-1 sudo[187817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:00:33 compute-1 sudo[187817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:00:33 compute-1 sudo[187817]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:34 compute-1 python3.9[187816]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:00:34 compute-1 sudo[187814]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:34 compute-1 sudo[187962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujyqosnzikrtafxblbcjzxovoiltrozf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090433.4790075-2286-182413493181529/AnsiballZ_copy.py'
Oct 10 10:00:34 compute-1 sudo[187962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:34 compute-1 python3.9[187964]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090433.4790075-2286-182413493181529/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:34 compute-1 sudo[187962]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:34 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:34 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:34 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:34.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:35 compute-1 sudo[188114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebracqjjxmxpdpenbbbnexhbtgvmgcqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090434.8267438-2286-103925834680979/AnsiballZ_stat.py'
Oct 10 10:00:35 compute-1 sudo[188114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:35 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:35 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:00:35 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:35.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:00:35 compute-1 python3.9[188116]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:00:35 compute-1 sudo[188114]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:35 compute-1 sudo[188238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dldnoqjpwxuxzlwyoyqgjoqkdgvyprur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090434.8267438-2286-103925834680979/AnsiballZ_copy.py'
Oct 10 10:00:35 compute-1 sudo[188238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:35 compute-1 ceph-mon[79167]: pgmap v431: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Oct 10 10:00:35 compute-1 python3.9[188240]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090434.8267438-2286-103925834680979/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:35 compute-1 sudo[188238]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:36 compute-1 sudo[188390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msvfvlacizsjguxwxuxlptxplawzycwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090435.994841-2286-177379231252413/AnsiballZ_stat.py'
Oct 10 10:00:36 compute-1 sudo[188390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:36 compute-1 python3.9[188392]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:00:36 compute-1 sudo[188390]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:36 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:36 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:36 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:36.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:36 compute-1 sudo[188513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdjfnalktjqnhcuauzrpepbfyoxbnbhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090435.994841-2286-177379231252413/AnsiballZ_copy.py'
Oct 10 10:00:36 compute-1 sudo[188513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:37 compute-1 python3.9[188515]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090435.994841-2286-177379231252413/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:37 compute-1 sudo[188513]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:37 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:37 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:00:37 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:37.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:00:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:00:37 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Scheduled restart job, restart counter is at 4.
Oct 10 10:00:37 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 10:00:37 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 2.348s CPU time.
Oct 10 10:00:37 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 10:00:37 compute-1 sudo[188711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzcobmxolmzswxjkddwqpbekpinsplrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090437.2839243-2286-137998623201377/AnsiballZ_stat.py'
Oct 10 10:00:37 compute-1 sudo[188711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:37 compute-1 podman[188712]: 2025-10-10 10:00:37.617488034 +0000 UTC m=+0.046755456 container create 63df59f99d9151834390e421fa0de6fb0a13455c354e639cfbf83e1dd854998d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:00:37 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/917acbe272dbbbe628a7bcaeabbc471431a021cf9fd0a66f48281f67e5a2321a/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 10 10:00:37 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/917acbe272dbbbe628a7bcaeabbc471431a021cf9fd0a66f48281f67e5a2321a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:00:37 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/917acbe272dbbbe628a7bcaeabbc471431a021cf9fd0a66f48281f67e5a2321a/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:00:37 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/917acbe272dbbbe628a7bcaeabbc471431a021cf9fd0a66f48281f67e5a2321a/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.mssvzx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:00:37 compute-1 podman[188712]: 2025-10-10 10:00:37.596171652 +0000 UTC m=+0.025439094 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:00:37 compute-1 podman[188712]: 2025-10-10 10:00:37.704784075 +0000 UTC m=+0.134051577 container init 63df59f99d9151834390e421fa0de6fb0a13455c354e639cfbf83e1dd854998d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct 10 10:00:37 compute-1 podman[188712]: 2025-10-10 10:00:37.711115825 +0000 UTC m=+0.140383277 container start 63df59f99d9151834390e421fa0de6fb0a13455c354e639cfbf83e1dd854998d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:00:37 compute-1 bash[188712]: 63df59f99d9151834390e421fa0de6fb0a13455c354e639cfbf83e1dd854998d
Oct 10 10:00:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:37 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 10 10:00:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:37 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 10 10:00:37 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 10:00:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:37 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 10 10:00:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:37 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 10 10:00:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:37 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 10 10:00:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:37 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 10 10:00:37 compute-1 python3.9[188720]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:00:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:37 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 10 10:00:37 compute-1 sudo[188711]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:37 compute-1 ceph-mon[79167]: pgmap v432: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Oct 10 10:00:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:37 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:00:38 compute-1 sudo[188891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccouhqqnpzrneuwebdydezrghmjzlfwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090437.2839243-2286-137998623201377/AnsiballZ_copy.py'
Oct 10 10:00:38 compute-1 sudo[188891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:38 compute-1 python3.9[188893]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090437.2839243-2286-137998623201377/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:38 compute-1 sudo[188891]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:38 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:38 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:00:38 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:38.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:00:39 compute-1 python3.9[189043]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 10:00:39 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:39 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:39 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:39.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:39 compute-1 ceph-mon[79167]: pgmap v433: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Oct 10 10:00:39 compute-1 sudo[189211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orpyngsifvsugkybhflpbmmhzdngrcvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090439.452487-2904-74572965922436/AnsiballZ_seboolean.py'
Oct 10 10:00:39 compute-1 sudo[189211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:39 compute-1 podman[189171]: 2025-10-10 10:00:39.93641459 +0000 UTC m=+0.088842424 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true)
Oct 10 10:00:40 compute-1 python3.9[189216]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Oct 10 10:00:40 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:40 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:40 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:40.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:41 compute-1 sudo[189211]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:41 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:41 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:41 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:41.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:41 compute-1 sudo[189383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-attrutmiuyjpdrewmjqitdaaqfhtrywj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090441.5463781-2928-234149088653212/AnsiballZ_copy.py'
Oct 10 10:00:41 compute-1 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Oct 10 10:00:41 compute-1 sudo[189383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:41 compute-1 ceph-mon[79167]: pgmap v434: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Oct 10 10:00:41 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0.
Oct 10 10:00:41 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:00:41.921210) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 10:00:41 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28
Oct 10 10:00:41 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090441921260, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 4656, "num_deletes": 502, "total_data_size": 12891337, "memory_usage": 13062144, "flush_reason": "Manual Compaction"}
Oct 10 10:00:41 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started
Oct 10 10:00:41 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090441959017, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 8357485, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13278, "largest_seqno": 17929, "table_properties": {"data_size": 8339729, "index_size": 12010, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4677, "raw_key_size": 36450, "raw_average_key_size": 19, "raw_value_size": 8303208, "raw_average_value_size": 4480, "num_data_blocks": 525, "num_entries": 1853, "num_filter_entries": 1853, "num_deletions": 502, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089994, "oldest_key_time": 1760089994, "file_creation_time": 1760090441, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:00:41 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 37848 microseconds, and 16367 cpu microseconds.
Oct 10 10:00:41 compute-1 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:00:41 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:00:41.959064) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 8357485 bytes OK
Oct 10 10:00:41 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:00:41.959085) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started
Oct 10 10:00:41 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:00:41.960288) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done
Oct 10 10:00:41 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:00:41.960304) EVENT_LOG_v1 {"time_micros": 1760090441960299, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 10:00:41 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:00:41.960339) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 10:00:41 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 12870831, prev total WAL file size 12870831, number of live WAL files 2.
Oct 10 10:00:41 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:00:41 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:00:41.963238) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Oct 10 10:00:41 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 10:00:41 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(8161KB)], [27(12MB)]
Oct 10 10:00:41 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090441963289, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 21059165, "oldest_snapshot_seqno": -1}
Oct 10 10:00:42 compute-1 python3.9[189385]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:42 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 5073 keys, 15514300 bytes, temperature: kUnknown
Oct 10 10:00:42 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090442054500, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 15514300, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15475740, "index_size": 24754, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12741, "raw_key_size": 126897, "raw_average_key_size": 25, "raw_value_size": 15379141, "raw_average_value_size": 3031, "num_data_blocks": 1042, "num_entries": 5073, "num_filter_entries": 5073, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089522, "oldest_key_time": 0, "file_creation_time": 1760090441, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:00:42 compute-1 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:00:42 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:00:42.054685) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 15514300 bytes
Oct 10 10:00:42 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:00:42.055975) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 230.8 rd, 170.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(8.0, 12.1 +0.0 blob) out(14.8 +0.0 blob), read-write-amplify(4.4) write-amplify(1.9) OK, records in: 6095, records dropped: 1022 output_compression: NoCompression
Oct 10 10:00:42 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:00:42.055991) EVENT_LOG_v1 {"time_micros": 1760090442055983, "job": 14, "event": "compaction_finished", "compaction_time_micros": 91260, "compaction_time_cpu_micros": 52399, "output_level": 6, "num_output_files": 1, "total_output_size": 15514300, "num_input_records": 6095, "num_output_records": 5073, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 10:00:42 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:00:42 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090442057387, "job": 14, "event": "table_file_deletion", "file_number": 29}
Oct 10 10:00:42 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:00:42 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090442059647, "job": 14, "event": "table_file_deletion", "file_number": 27}
Oct 10 10:00:42 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:00:41.963126) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:00:42 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:00:42.059758) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:00:42 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:00:42.059766) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:00:42 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:00:42.059770) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:00:42 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:00:42.059774) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:00:42 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:00:42.059778) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:00:42 compute-1 sudo[189383]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:00:42.193 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:00:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:00:42.194 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:00:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:00:42.194 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:00:42 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:00:42 compute-1 sudo[189535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjqdqyqulmmuaalxxjlvtyjitpanbawg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090442.2228818-2928-85197133203224/AnsiballZ_copy.py'
Oct 10 10:00:42 compute-1 sudo[189535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:42 compute-1 python3.9[189537]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:42 compute-1 sudo[189535]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:42 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:42 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:42 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:42.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:43 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:43 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:43 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:43.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:43 compute-1 sudo[189688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsbbqmkixdpbwwrvppsdqdybpphtyntq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090443.0048523-2928-206128841631565/AnsiballZ_copy.py'
Oct 10 10:00:43 compute-1 sudo[189688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:43 compute-1 python3.9[189690]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:43 compute-1 sudo[189688]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:43 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Oct 10 10:00:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:43 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Oct 10 10:00:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:43 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:00:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:43 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:00:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:43 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:00:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:43 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:00:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:43 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:00:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:43 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:00:43 compute-1 ceph-mon[79167]: pgmap v435: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Oct 10 10:00:43 compute-1 podman[189779]: 2025-10-10 10:00:43.999407932 +0000 UTC m=+0.079662148 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 10 10:00:44 compute-1 sudo[189857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agmtjmuxggrxdlhgrewrszzcokprwbuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090443.752862-2928-258404254436620/AnsiballZ_copy.py'
Oct 10 10:00:44 compute-1 sudo[189857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:44 compute-1 python3.9[189859]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:44 compute-1 sudo[189857]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:44 compute-1 sudo[190009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqprvzfnbpktkecggwgilddgkzicmdjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090444.4784095-2928-275494496123343/AnsiballZ_copy.py'
Oct 10 10:00:44 compute-1 sudo[190009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:44 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:44 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:00:44 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:44.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:00:45 compute-1 python3.9[190011]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:45 compute-1 sudo[190009]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:45 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:45 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:45 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:45.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:45 compute-1 sudo[190162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkzhxuultuxklzizkdmucskhhwzbxvpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090445.2816768-3036-109170795208843/AnsiballZ_copy.py'
Oct 10 10:00:45 compute-1 sudo[190162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:45 compute-1 python3.9[190164]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:45 compute-1 sudo[190162]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:45 compute-1 ceph-mon[79167]: pgmap v436: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Oct 10 10:00:46 compute-1 sudo[190314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iicfdpdhympxncibufvcwzkvgmtcymyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090445.9896035-3036-165327288610105/AnsiballZ_copy.py'
Oct 10 10:00:46 compute-1 sudo[190314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:46 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100046 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 10:00:46 compute-1 python3.9[190316]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:46 compute-1 sudo[190314]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:46 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:46 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:46 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:46.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:46 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:00:47 compute-1 auditd[702]: Audit daemon rotating log files
Oct 10 10:00:47 compute-1 sudo[190466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bknrfrdnqmasguloyjyhiaafmfdkypln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090446.7568913-3036-186374406690817/AnsiballZ_copy.py'
Oct 10 10:00:47 compute-1 sudo[190466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:47 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:47 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:47 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:47.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:47 compute-1 python3.9[190468]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:47 compute-1 sudo[190466]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:00:47 compute-1 sudo[190619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xaonqdzrvfxrkdhzrewjmrujbphgcude ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090447.4371474-3036-68117659511149/AnsiballZ_copy.py'
Oct 10 10:00:47 compute-1 sudo[190619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:47 compute-1 python3.9[190621]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:47 compute-1 sudo[190619]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:47 compute-1 ceph-mon[79167]: pgmap v437: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Oct 10 10:00:48 compute-1 sudo[190771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eymofjarriossvhybxwzghebmxksqxtl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090448.174481-3036-106762680088991/AnsiballZ_copy.py'
Oct 10 10:00:48 compute-1 sudo[190771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:48 compute-1 python3.9[190773]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:48 compute-1 sudo[190771]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:48 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:48 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:48 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:48.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:49 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:49 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:00:49 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:49.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:00:49 compute-1 sudo[190924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqpzwlvhrnqtczzsurzfredonbtpgmii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090449.0420308-3144-125654403864007/AnsiballZ_systemd.py'
Oct 10 10:00:49 compute-1 sudo[190924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:49 compute-1 python3.9[190926]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 10:00:49 compute-1 systemd[1]: Reloading.
Oct 10 10:00:49 compute-1 systemd-rc-local-generator[190956]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:00:49 compute-1 systemd-sysv-generator[190959]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:00:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000010:nfs.cephfs.0: -2
Oct 10 10:00:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 10:00:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 10 10:00:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 10 10:00:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 10 10:00:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 10 10:00:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 10 10:00:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 10 10:00:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:00:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:00:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:00:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 10 10:00:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:00:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 10 10:00:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 10 10:00:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 10 10:00:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 10 10:00:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 10 10:00:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 10 10:00:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 10 10:00:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 10 10:00:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 10 10:00:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 10 10:00:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 10 10:00:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 10 10:00:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 10:00:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 10 10:00:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:49 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 10:00:50 compute-1 ceph-mon[79167]: pgmap v438: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Oct 10 10:00:50 compute-1 systemd[1]: Starting libvirt logging daemon socket...
Oct 10 10:00:50 compute-1 systemd[1]: Listening on libvirt logging daemon socket.
Oct 10 10:00:50 compute-1 systemd[1]: Starting libvirt logging daemon admin socket...
Oct 10 10:00:50 compute-1 systemd[1]: Listening on libvirt logging daemon admin socket.
Oct 10 10:00:50 compute-1 systemd[1]: Starting libvirt logging daemon...
Oct 10 10:00:50 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:50 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa0f0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:50 compute-1 systemd[1]: Started libvirt logging daemon.
Oct 10 10:00:50 compute-1 sudo[190924]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:50 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:50 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa0ec001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:50 compute-1 sudo[191134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyuzxxbrkdeloljbncmbtkgzklravnfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090450.499844-3144-44976124810607/AnsiballZ_systemd.py'
Oct 10 10:00:50 compute-1 sudo[191134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:50 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:50 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:00:50 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:50.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:00:51 compute-1 python3.9[191136]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 10:00:51 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:51 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:51 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:51.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:51 compute-1 systemd[1]: Reloading.
Oct 10 10:00:51 compute-1 systemd-rc-local-generator[191162]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:00:51 compute-1 systemd-sysv-generator[191167]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:00:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:51 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa0c8000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:51 compute-1 systemd[1]: Starting libvirt nodedev daemon socket...
Oct 10 10:00:51 compute-1 systemd[1]: Listening on libvirt nodedev daemon socket.
Oct 10 10:00:51 compute-1 systemd[1]: Starting libvirt nodedev daemon admin socket...
Oct 10 10:00:51 compute-1 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Oct 10 10:00:51 compute-1 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Oct 10 10:00:51 compute-1 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Oct 10 10:00:51 compute-1 systemd[1]: Starting libvirt nodedev daemon...
Oct 10 10:00:51 compute-1 systemd[1]: Started libvirt nodedev daemon.
Oct 10 10:00:51 compute-1 sudo[191134]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:52 compute-1 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Oct 10 10:00:52 compute-1 ceph-mon[79167]: pgmap v439: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Oct 10 10:00:52 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:52 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa0f0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:52 compute-1 sudo[191351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqpncajmrdnoqyhcfqhlvavtpfdonpeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090451.8363843-3144-131151012657173/AnsiballZ_systemd.py'
Oct 10 10:00:52 compute-1 sudo[191351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:00:52 compute-1 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Oct 10 10:00:52 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:52 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa0f0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:52 compute-1 python3.9[191353]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 10:00:52 compute-1 systemd[1]: Reloading.
Oct 10 10:00:52 compute-1 systemd-rc-local-generator[191385]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:00:52 compute-1 systemd-sysv-generator[191391]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:00:52 compute-1 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Oct 10 10:00:52 compute-1 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Oct 10 10:00:52 compute-1 systemd[1]: Starting libvirt proxy daemon admin socket...
Oct 10 10:00:52 compute-1 systemd[1]: Starting libvirt proxy daemon read-only socket...
Oct 10 10:00:52 compute-1 systemd[1]: Listening on libvirt proxy daemon admin socket.
Oct 10 10:00:52 compute-1 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Oct 10 10:00:52 compute-1 systemd[1]: Starting libvirt proxy daemon...
Oct 10 10:00:52 compute-1 systemd[1]: Started libvirt proxy daemon.
Oct 10 10:00:52 compute-1 sudo[191351]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:52 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:52 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:52 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:52.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:53 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:53 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:53 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:53.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:53 compute-1 sudo[191570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asilrlnedievnvhrsnovnyunilrulaxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090453.022691-3144-217493607011768/AnsiballZ_systemd.py'
Oct 10 10:00:53 compute-1 sudo[191570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100053 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 10:00:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:53 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa0ec0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:53 compute-1 setroubleshoot[191303]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l e878a9cd-65b7-41e1-ab4d-cb9e8b771563
Oct 10 10:00:53 compute-1 python3.9[191572]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 10:00:53 compute-1 setroubleshoot[191303]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Oct 10 10:00:53 compute-1 setroubleshoot[191303]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l e878a9cd-65b7-41e1-ab4d-cb9e8b771563
Oct 10 10:00:53 compute-1 setroubleshoot[191303]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Oct 10 10:00:53 compute-1 systemd[1]: Reloading.
Oct 10 10:00:53 compute-1 systemd-rc-local-generator[191595]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:00:53 compute-1 systemd-sysv-generator[191602]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:00:54 compute-1 systemd[1]: Listening on libvirt locking daemon socket.
Oct 10 10:00:54 compute-1 systemd[1]: Starting libvirt QEMU daemon socket...
Oct 10 10:00:54 compute-1 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 10 10:00:54 compute-1 systemd[1]: Starting Virtual Machine and Container Registration Service...
Oct 10 10:00:54 compute-1 systemd[1]: Listening on libvirt QEMU daemon socket.
Oct 10 10:00:54 compute-1 ceph-mon[79167]: pgmap v440: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Oct 10 10:00:54 compute-1 systemd[1]: Starting libvirt QEMU daemon admin socket...
Oct 10 10:00:54 compute-1 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Oct 10 10:00:54 compute-1 sudo[191610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:00:54 compute-1 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Oct 10 10:00:54 compute-1 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Oct 10 10:00:54 compute-1 sudo[191610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:00:54 compute-1 sudo[191610]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:54 compute-1 systemd[1]: Started Virtual Machine and Container Registration Service.
Oct 10 10:00:54 compute-1 systemd[1]: Starting libvirt QEMU daemon...
Oct 10 10:00:54 compute-1 systemd[1]: Started libvirt QEMU daemon.
Oct 10 10:00:54 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:54 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa0c80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:54 compute-1 sudo[191570]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:54 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:54 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa0f0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:54 compute-1 sudo[191809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-joilwdneopukekeawduhlwabvpoglblh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090454.3338537-3144-44753085169993/AnsiballZ_systemd.py'
Oct 10 10:00:54 compute-1 sudo[191809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:54 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:54 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:54 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:54.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:54 compute-1 python3.9[191811]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 10:00:54 compute-1 systemd[1]: Reloading.
Oct 10 10:00:55 compute-1 systemd-rc-local-generator[191840]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:00:55 compute-1 systemd-sysv-generator[191843]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:00:55 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:55 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:55 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:55.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:55 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa0f0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:55 compute-1 systemd[1]: Starting libvirt secret daemon socket...
Oct 10 10:00:55 compute-1 systemd[1]: Listening on libvirt secret daemon socket.
Oct 10 10:00:55 compute-1 systemd[1]: Starting libvirt secret daemon admin socket...
Oct 10 10:00:55 compute-1 systemd[1]: Starting libvirt secret daemon read-only socket...
Oct 10 10:00:55 compute-1 systemd[1]: Listening on libvirt secret daemon read-only socket.
Oct 10 10:00:55 compute-1 systemd[1]: Listening on libvirt secret daemon admin socket.
Oct 10 10:00:55 compute-1 systemd[1]: Starting libvirt secret daemon...
Oct 10 10:00:55 compute-1 systemd[1]: Started libvirt secret daemon.
Oct 10 10:00:55 compute-1 sudo[191809]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:56 compute-1 ceph-mon[79167]: pgmap v441: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 597 B/s wr, 2 op/s
Oct 10 10:00:56 compute-1 sudo[192021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhpepbabxfwpcwqthvyqhdhtdbcqnqhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090455.8067634-3255-106288585751191/AnsiballZ_file.py'
Oct 10 10:00:56 compute-1 sudo[192021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:56 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:56 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa0ec0023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:56 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:56 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa0c80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:00:56 compute-1 python3.9[192023]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:00:56 compute-1 sudo[192021]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:56 compute-1 sudo[192173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sphznlzzuivebxqlvfildatbfnawqovf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090456.5417807-3279-211105618523027/AnsiballZ_find.py'
Oct 10 10:00:56 compute-1 sudo[192173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:56 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:56 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:00:56 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:56.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:00:57 compute-1 python3.9[192175]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 10 10:00:57 compute-1 sudo[192173]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:57 compute-1 ceph-mon[79167]: pgmap v442: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 597 B/s wr, 2 op/s
Oct 10 10:00:57 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:57 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:57 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:57.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:00:57 compute-1 kernel: ganesha.nfsd[190969]: segfault at 50 ip 00007fa19e31632e sp 00007fa1637fd210 error 4 in libntirpc.so.5.8[7fa19e2fb000+2c000] likely on CPU 2 (core 0, socket 2)
Oct 10 10:00:57 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 10 10:00:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[188729]: 10/10/2025 10:00:57 : epoch 68e8d945 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa0f0000df0 fd 38 proxy ignored for local
Oct 10 10:00:57 compute-1 systemd[1]: Started Process Core Dump (PID 192201/UID 0).
Oct 10 10:00:57 compute-1 sudo[192328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkpzbitfkctsiodhgfopnegwhjvbgcmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090457.5891588-3303-208128933853429/AnsiballZ_command.py'
Oct 10 10:00:57 compute-1 sudo[192328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:00:58 compute-1 python3.9[192330]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 10:00:58 compute-1 sudo[192328]: pam_unix(sudo:session): session closed for user root
Oct 10 10:00:58 compute-1 systemd-coredump[192202]: Process 188733 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 45:
                                                    #0  0x00007fa19e31632e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Oct 10 10:00:58 compute-1 systemd[1]: systemd-coredump@4-192201-0.service: Deactivated successfully.
Oct 10 10:00:58 compute-1 podman[192363]: 2025-10-10 10:00:58.454745905 +0000 UTC m=+0.046141210 container died 63df59f99d9151834390e421fa0de6fb0a13455c354e639cfbf83e1dd854998d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 10 10:00:58 compute-1 systemd[1]: var-lib-containers-storage-overlay-917acbe272dbbbe628a7bcaeabbc471431a021cf9fd0a66f48281f67e5a2321a-merged.mount: Deactivated successfully.
Oct 10 10:00:58 compute-1 podman[192363]: 2025-10-10 10:00:58.518052992 +0000 UTC m=+0.109448297 container remove 63df59f99d9151834390e421fa0de6fb0a13455c354e639cfbf83e1dd854998d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Oct 10 10:00:58 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Main process exited, code=exited, status=139/n/a
Oct 10 10:00:58 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Failed with result 'exit-code'.
Oct 10 10:00:58 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 1.412s CPU time.
Oct 10 10:00:58 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:58 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:58 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:00:58.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:59 compute-1 python3.9[192532]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 10 10:00:59 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:00:59 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:00:59 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:00:59.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:00:59 compute-1 ceph-mon[79167]: pgmap v443: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 597 B/s wr, 2 op/s
Oct 10 10:01:00 compute-1 python3.9[192683]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:01:00 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:00 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:00 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:00.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:01 compute-1 python3.9[192804]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760090459.4863524-3360-43501129050237/.source.xml follow=False _original_basename=secret.xml.j2 checksum=baa25a2f67c100fe0cd0e069ccc25ef935446dd6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:01:01 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:01 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:01 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:01.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:01 compute-1 ceph-mon[79167]: pgmap v444: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Oct 10 10:01:01 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:01:01 compute-1 CROND[192955]: (root) CMD (run-parts /etc/cron.hourly)
Oct 10 10:01:01 compute-1 run-parts[192960]: (/etc/cron.hourly) starting 0anacron
Oct 10 10:01:01 compute-1 sudo[192957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyjbxdoqtgyakkkcmzoohuaionorpiui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090461.326136-3405-124988364226315/AnsiballZ_command.py'
Oct 10 10:01:01 compute-1 sudo[192957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:01 compute-1 anacron[192970]: Anacron started on 2025-10-10
Oct 10 10:01:01 compute-1 anacron[192970]: Will run job `cron.daily' in 33 min.
Oct 10 10:01:01 compute-1 anacron[192970]: Will run job `cron.weekly' in 53 min.
Oct 10 10:01:01 compute-1 anacron[192970]: Will run job `cron.monthly' in 73 min.
Oct 10 10:01:01 compute-1 anacron[192970]: Jobs will be executed sequentially
Oct 10 10:01:01 compute-1 run-parts[192972]: (/etc/cron.hourly) finished 0anacron
Oct 10 10:01:01 compute-1 CROND[192951]: (root) CMDEND (run-parts /etc/cron.hourly)
Oct 10 10:01:01 compute-1 python3.9[192968]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 21f084a3-af34-5230-afe4-ea5cd24a55f4
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 10:01:01 compute-1 polkitd[6374]: Registered Authentication Agent for unix-process:192974:337567 (system bus name :1.2003 [/usr/bin/pkttyagent --process 192974 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 10 10:01:01 compute-1 polkitd[6374]: Unregistered Authentication Agent for unix-process:192974:337567 (system bus name :1.2003, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 10 10:01:02 compute-1 polkitd[6374]: Registered Authentication Agent for unix-process:192973:337567 (system bus name :1.2004 [/usr/bin/pkttyagent --process 192973 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 10 10:01:02 compute-1 polkitd[6374]: Unregistered Authentication Agent for unix-process:192973:337567 (system bus name :1.2004, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 10 10:01:02 compute-1 sudo[192957]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:01:02 compute-1 python3.9[193134]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:01:02 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:02 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:02 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:02.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:03 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:03 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:03 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:03.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100103 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 10:01:03 compute-1 sudo[193285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwlxxbwvfhungmpiqdidnhxppnryplxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090463.1079023-3453-58064539921379/AnsiballZ_command.py'
Oct 10 10:01:03 compute-1 sudo[193285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:03 compute-1 ceph-mon[79167]: pgmap v445: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 10:01:03 compute-1 sudo[193285]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:03 compute-1 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Oct 10 10:01:03 compute-1 systemd[1]: setroubleshootd.service: Deactivated successfully.
Oct 10 10:01:04 compute-1 sudo[193438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrihgytbxgbtxrvvqpgoakpqzlzsffaq ; FSID=21f084a3-af34-5230-afe4-ea5cd24a55f4 KEY=AQAP1ehoAAAAABAAt8v7pISuvMofUPTRybMptA== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090463.9624815-3477-274706223767547/AnsiballZ_command.py'
Oct 10 10:01:04 compute-1 sudo[193438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:04 compute-1 polkitd[6374]: Registered Authentication Agent for unix-process:193441:337836 (system bus name :1.2007 [/usr/bin/pkttyagent --process 193441 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 10 10:01:04 compute-1 polkitd[6374]: Unregistered Authentication Agent for unix-process:193441:337836 (system bus name :1.2007, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 10 10:01:04 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:04 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:04 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:04.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:05 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:05 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:05 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:05.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:05 compute-1 ceph-mon[79167]: pgmap v446: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:01:05 compute-1 sudo[193438]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:06 compute-1 sudo[193597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnbhjjbehvlvcmcjlbnevodjvxtuqviu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090465.9059644-3501-1821683172168/AnsiballZ_copy.py'
Oct 10 10:01:06 compute-1 sudo[193597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:06 compute-1 python3.9[193599]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:01:06 compute-1 sudo[193597]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:06 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:06 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:01:06 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:06.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:01:07 compute-1 sudo[193749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdmiixiscagoroxpwhhmkvpeapqbdddc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090466.7360697-3525-192718964490057/AnsiballZ_stat.py'
Oct 10 10:01:07 compute-1 sudo[193749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:07 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:07 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:07 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:07.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:01:07 compute-1 python3.9[193751]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:01:07 compute-1 sudo[193749]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:07 compute-1 ceph-mon[79167]: pgmap v447: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:01:07 compute-1 sudo[193873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmwuhhujyamywlnnkletkjpscqnmyobs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090466.7360697-3525-192718964490057/AnsiballZ_copy.py'
Oct 10 10:01:07 compute-1 sudo[193873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:07 compute-1 python3.9[193875]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1760090466.7360697-3525-192718964490057/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:01:08 compute-1 sudo[193873]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:08 compute-1 sudo[194025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odgfsmmxhjmbqbqdfhmvixzvuxyaaewx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090468.3613145-3573-172405572534986/AnsiballZ_file.py'
Oct 10 10:01:08 compute-1 sudo[194025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:08 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Scheduled restart job, restart counter is at 5.
Oct 10 10:01:08 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 10:01:08 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 1.412s CPU time.
Oct 10 10:01:08 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 10:01:08 compute-1 python3.9[194027]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:01:08 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:08 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:08 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:08.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:08 compute-1 sudo[194025]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100109 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 10:01:09 compute-1 podman[194113]: 2025-10-10 10:01:09.177978841 +0000 UTC m=+0.049732245 container create e9ca41b00a4508e80f57e571972a3f5c37c766e07b204ef1aea54156f1cf77a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Oct 10 10:01:09 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:09 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:01:09 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:09.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:01:09 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/035d9430a00654dc757673aa7252531a3689382a674128a5bc47736d6c3eeaae/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 10 10:01:09 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/035d9430a00654dc757673aa7252531a3689382a674128a5bc47736d6c3eeaae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:01:09 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/035d9430a00654dc757673aa7252531a3689382a674128a5bc47736d6c3eeaae/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:01:09 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/035d9430a00654dc757673aa7252531a3689382a674128a5bc47736d6c3eeaae/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.mssvzx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:01:09 compute-1 podman[194113]: 2025-10-10 10:01:09.1574801 +0000 UTC m=+0.029233524 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:01:09 compute-1 podman[194113]: 2025-10-10 10:01:09.257619227 +0000 UTC m=+0.129372641 container init e9ca41b00a4508e80f57e571972a3f5c37c766e07b204ef1aea54156f1cf77a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 10 10:01:09 compute-1 podman[194113]: 2025-10-10 10:01:09.27487582 +0000 UTC m=+0.146629224 container start e9ca41b00a4508e80f57e571972a3f5c37c766e07b204ef1aea54156f1cf77a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:01:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:09 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 10 10:01:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:09 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 10 10:01:09 compute-1 bash[194113]: e9ca41b00a4508e80f57e571972a3f5c37c766e07b204ef1aea54156f1cf77a8
Oct 10 10:01:09 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 10:01:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:09 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 10 10:01:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:09 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 10 10:01:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:09 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 10 10:01:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:09 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 10 10:01:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:09 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 10 10:01:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:09 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:01:09 compute-1 sudo[194279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlwewixkktiswteodbamhyijhmwstnho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090469.3316581-3597-49773931768577/AnsiballZ_stat.py'
Oct 10 10:01:09 compute-1 sudo[194279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:09 compute-1 ceph-mon[79167]: pgmap v448: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:01:09 compute-1 python3.9[194281]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:01:09 compute-1 sudo[194279]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:10 compute-1 sudo[194372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mndazvsdtydndysznxybeolcxdpgcqjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090469.3316581-3597-49773931768577/AnsiballZ_file.py'
Oct 10 10:01:10 compute-1 sudo[194372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:10 compute-1 podman[194331]: 2025-10-10 10:01:10.242880237 +0000 UTC m=+0.100999040 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Oct 10 10:01:10 compute-1 python3.9[194379]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:01:10 compute-1 sudo[194372]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:10 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:10 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:10 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:10.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:10 compute-1 sudo[194532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djwcniruadffpxfevsjyljdvfjsaifam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090470.6225555-3633-275709786771482/AnsiballZ_stat.py'
Oct 10 10:01:10 compute-1 sudo[194532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:11 compute-1 python3.9[194534]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:01:11 compute-1 sudo[194532]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:11 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:11 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:01:11 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:11.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:01:11 compute-1 sudo[194611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mezskhcutklekasrwwpfdapnxzbcikfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090470.6225555-3633-275709786771482/AnsiballZ_file.py'
Oct 10 10:01:11 compute-1 sudo[194611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:11 compute-1 python3.9[194613]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.ukm_xrch recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:01:11 compute-1 sudo[194611]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:11 compute-1 ceph-mon[79167]: pgmap v449: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Oct 10 10:01:12 compute-1 sudo[194763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgiixlzxhobyppezhnhqapossvsquspi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090471.8924491-3669-69367523204958/AnsiballZ_stat.py'
Oct 10 10:01:12 compute-1 sudo[194763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:01:12 compute-1 python3.9[194765]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:01:12 compute-1 sudo[194763]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:12 compute-1 sudo[194841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojtcuvuojufxstnsdtdszufwtvoqwolz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090471.8924491-3669-69367523204958/AnsiballZ_file.py'
Oct 10 10:01:12 compute-1 sudo[194841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:12 compute-1 sudo[194842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:01:12 compute-1 sudo[194842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:01:12 compute-1 sudo[194842]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:12 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:12 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:12 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:12.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:12 compute-1 sudo[194869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:01:12 compute-1 sudo[194869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:01:13 compute-1 python3.9[194856]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:01:13 compute-1 sudo[194841]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:13 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:13 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:01:13 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:13.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:01:13 compute-1 sudo[194869]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:13 compute-1 sudo[195075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzgzoxtadidbngzaxdhwazyjlzrhqqhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090473.3937218-3708-149869460092326/AnsiballZ_command.py'
Oct 10 10:01:13 compute-1 sudo[195075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:13 compute-1 ceph-mon[79167]: pgmap v450: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:01:13 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:01:13 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:01:13 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:01:13 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:01:13 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:01:13 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:01:13 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:01:13 compute-1 python3.9[195077]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 10:01:14 compute-1 sudo[195075]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:14 compute-1 sudo[195103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:01:14 compute-1 sudo[195103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:01:14 compute-1 sudo[195103]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:14 compute-1 podman[195127]: 2025-10-10 10:01:14.272825562 +0000 UTC m=+0.061878090 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:01:14 compute-1 sudo[195273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nklixgvbolmpbbkbojwujuorrsvnyjkc ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760090474.239345-3732-66508580833554/AnsiballZ_edpm_nftables_from_files.py'
Oct 10 10:01:14 compute-1 sudo[195273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:14 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:14 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:14 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:14.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:14 compute-1 python3[195275]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 10 10:01:15 compute-1 sudo[195273]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:15 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:15 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:15 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:15.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:15 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:01:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:15 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:01:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:15 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:01:15 compute-1 sudo[195426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxshiwwjesmrgnzxixgxqscbdzcmrqfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090475.2163434-3756-56831101979143/AnsiballZ_stat.py'
Oct 10 10:01:15 compute-1 sudo[195426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:15 compute-1 ceph-mon[79167]: pgmap v451: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:01:15 compute-1 python3.9[195428]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:01:15 compute-1 sudo[195426]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:15 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:01:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:15 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:01:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:15 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:01:16 compute-1 sudo[195504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wayfxscmrvqbcvbhptjqgeezmlvotsgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090475.2163434-3756-56831101979143/AnsiballZ_file.py'
Oct 10 10:01:16 compute-1 sudo[195504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:16 compute-1 python3.9[195506]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:01:16 compute-1 sudo[195504]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:01:16 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:16 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:01:16 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:16.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:01:17 compute-1 sudo[195656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgowihmeunpoyzlzudueetaqobwuekji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090476.727683-3792-121672428087898/AnsiballZ_stat.py'
Oct 10 10:01:17 compute-1 sudo[195656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:17 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:17 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:17 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:17.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:01:17 compute-1 python3.9[195658]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:01:17 compute-1 sudo[195656]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:17 compute-1 sudo[195735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djzzyzplperaeuytqxrqzcykwhhnuztu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090476.727683-3792-121672428087898/AnsiballZ_file.py'
Oct 10 10:01:17 compute-1 sudo[195735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:17 compute-1 ceph-mon[79167]: pgmap v452: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:01:17 compute-1 python3.9[195737]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:01:17 compute-1 sudo[195735]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:18 compute-1 sudo[195887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grvficxaeruvhsltftxxclikrodvuatu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090478.1592104-3828-45464588492053/AnsiballZ_stat.py'
Oct 10 10:01:18 compute-1 sudo[195887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:18 compute-1 python3.9[195889]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:01:18 compute-1 sudo[195887]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:18 compute-1 sudo[195892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:01:18 compute-1 sudo[195892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:01:18 compute-1 sudo[195892]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:18 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:18 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:01:18 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:18.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:01:19 compute-1 sudo[195990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvfzutbzeddakwmzvbudklqaywijxqaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090478.1592104-3828-45464588492053/AnsiballZ_file.py'
Oct 10 10:01:19 compute-1 sudo[195990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:19 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:19 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:01:19 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:19.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:01:19 compute-1 python3.9[195992]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:01:19 compute-1 sudo[195990]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:19 compute-1 ceph-mon[79167]: pgmap v453: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Oct 10 10:01:19 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:01:19 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:01:19 compute-1 sudo[196143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psezevxdyxvgovcgcqtudnnwcnlmneuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090479.5206788-3864-107679809122000/AnsiballZ_stat.py'
Oct 10 10:01:19 compute-1 sudo[196143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:20 compute-1 python3.9[196145]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:01:20 compute-1 sudo[196143]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:20 compute-1 sudo[196221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kltceqlxxyxyaopmzeustibmzfzqzhxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090479.5206788-3864-107679809122000/AnsiballZ_file.py'
Oct 10 10:01:20 compute-1 sudo[196221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:20 compute-1 python3.9[196223]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:01:20 compute-1 sudo[196221]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:20 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:20 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:20 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:20.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:21 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:21 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:21 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:21.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:21 compute-1 sudo[196373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnlxpkukiavujbrfvsvfgrfjhxhmafvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090480.7944658-3900-190061166664170/AnsiballZ_stat.py'
Oct 10 10:01:21 compute-1 sudo[196373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:21 compute-1 python3.9[196375]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:01:21 compute-1 sudo[196373]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:21 compute-1 ceph-mon[79167]: pgmap v454: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Oct 10 10:01:21 compute-1 sudo[196499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gspctnljswrehjlayrxhkmgptcekcyih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090480.7944658-3900-190061166664170/AnsiballZ_copy.py'
Oct 10 10:01:21 compute-1 sudo[196499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 10:01:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 10 10:01:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 10 10:01:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 10 10:01:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 10 10:01:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 10 10:01:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 10 10:01:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:01:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:01:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:01:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 10 10:01:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:01:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 10 10:01:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 10 10:01:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 10 10:01:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 10 10:01:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 10 10:01:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 10 10:01:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 10 10:01:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 10 10:01:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 10 10:01:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 10 10:01:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 10 10:01:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 10:01:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 10 10:01:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 10:01:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 10 10:01:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:21 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:01:22 compute-1 python3.9[196501]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760090480.7944658-3900-190061166664170/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:01:22 compute-1 sudo[196499]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:22 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:22 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff408000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:22 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:01:22 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:22 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3fc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:22 compute-1 sudo[196667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqcuxwnvjdnxwqlckoivwotcmublvzcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090482.3562407-3945-148278823866033/AnsiballZ_file.py'
Oct 10 10:01:22 compute-1 sudo[196667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:22 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:22 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:22 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:22.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:22 compute-1 python3.9[196669]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:01:22 compute-1 sudo[196667]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:23 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:23 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:01:23 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:23.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:01:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:23 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3e4000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:23 compute-1 sudo[196820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kidorfisohgzqryzzaskndemhzgpmdny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090483.1724606-3969-226195808506315/AnsiballZ_command.py'
Oct 10 10:01:23 compute-1 sudo[196820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:23 compute-1 ceph-mon[79167]: pgmap v455: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.5 KiB/s wr, 5 op/s
Oct 10 10:01:23 compute-1 python3.9[196822]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 10:01:23 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0.
Oct 10 10:01:23 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:01:23.789268) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 10:01:23 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31
Oct 10 10:01:23 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090483789367, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 656, "num_deletes": 252, "total_data_size": 1233775, "memory_usage": 1252280, "flush_reason": "Manual Compaction"}
Oct 10 10:01:23 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started
Oct 10 10:01:23 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090483796516, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 571764, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17934, "largest_seqno": 18585, "table_properties": {"data_size": 568864, "index_size": 872, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7595, "raw_average_key_size": 19, "raw_value_size": 562820, "raw_average_value_size": 1481, "num_data_blocks": 38, "num_entries": 380, "num_filter_entries": 380, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760090442, "oldest_key_time": 1760090442, "file_creation_time": 1760090483, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:01:23 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 7307 microseconds, and 4457 cpu microseconds.
Oct 10 10:01:23 compute-1 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:01:23 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:01:23.796580) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 571764 bytes OK
Oct 10 10:01:23 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:01:23.796604) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started
Oct 10 10:01:23 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:01:23.797822) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done
Oct 10 10:01:23 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:01:23.797845) EVENT_LOG_v1 {"time_micros": 1760090483797838, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 10:01:23 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:01:23.797866) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 10:01:23 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 1230155, prev total WAL file size 1230155, number of live WAL files 2.
Oct 10 10:01:23 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:01:23 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:01:23.798701) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323531' seq:72057594037927935, type:22 .. '6D67727374617400353034' seq:0, type:0; will stop at (end)
Oct 10 10:01:23 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 10:01:23 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(558KB)], [30(14MB)]
Oct 10 10:01:23 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090483798763, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 16086064, "oldest_snapshot_seqno": -1}
Oct 10 10:01:23 compute-1 sudo[196820]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:23 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 4951 keys, 12217286 bytes, temperature: kUnknown
Oct 10 10:01:23 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090483877479, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 12217286, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12183668, "index_size": 20132, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12421, "raw_key_size": 124747, "raw_average_key_size": 25, "raw_value_size": 12093240, "raw_average_value_size": 2442, "num_data_blocks": 840, "num_entries": 4951, "num_filter_entries": 4951, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089522, "oldest_key_time": 0, "file_creation_time": 1760090483, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:01:23 compute-1 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:01:23 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:01:23.877804) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 12217286 bytes
Oct 10 10:01:23 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:01:23.879227) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 204.0 rd, 155.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 14.8 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(49.5) write-amplify(21.4) OK, records in: 5453, records dropped: 502 output_compression: NoCompression
Oct 10 10:01:23 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:01:23.879257) EVENT_LOG_v1 {"time_micros": 1760090483879244, "job": 16, "event": "compaction_finished", "compaction_time_micros": 78841, "compaction_time_cpu_micros": 50327, "output_level": 6, "num_output_files": 1, "total_output_size": 12217286, "num_input_records": 5453, "num_output_records": 4951, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 10:01:23 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:01:23 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090483879908, "job": 16, "event": "table_file_deletion", "file_number": 32}
Oct 10 10:01:23 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:01:23 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090483885101, "job": 16, "event": "table_file_deletion", "file_number": 30}
Oct 10 10:01:23 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:01:23.798605) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:01:23 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:01:23.885291) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:01:23 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:01:23.885301) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:01:23 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:01:23.885305) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:01:23 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:01:23.885310) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:01:23 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:01:23.885314) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:01:24 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:24 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3e0000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:24 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:24 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3f8001230 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:24 compute-1 sudo[196975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymbghhbmjuqbcmdhlbsogkzqaidjdpik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090484.0390496-3993-257436792738107/AnsiballZ_blockinfile.py'
Oct 10 10:01:24 compute-1 sudo[196975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:24 compute-1 python3.9[196977]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:01:24 compute-1 sudo[196975]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:24 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:24 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:01:24 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:24.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:01:24 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:24 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:01:24 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:24 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:01:25 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:25 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:01:25 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:25.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:01:25 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100125 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 10:01:25 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:25 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3fc002700 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:25 compute-1 sudo[197128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzgnmxrwleyacapkjoguduhfzabhuerc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090485.105715-4020-128972278418147/AnsiballZ_command.py'
Oct 10 10:01:25 compute-1 sudo[197128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:25 compute-1 python3.9[197130]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 10:01:25 compute-1 sudo[197128]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:25 compute-1 ceph-mon[79167]: pgmap v456: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.4 KiB/s wr, 4 op/s
Oct 10 10:01:26 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:26 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3e40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:26 compute-1 sudo[197281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oojbshueezsdfzrtwtlzzefdirbxumop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090485.981335-4044-72652864878310/AnsiballZ_stat.py'
Oct 10 10:01:26 compute-1 sudo[197281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:26 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:26 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3e00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:26 compute-1 python3.9[197283]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 10:01:26 compute-1 sudo[197281]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:26 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:26 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:01:26 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:26.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:01:27 compute-1 sudo[197435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlqoaiioikpoiselzddeklrynjshbtpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090486.8132036-4068-140327206045753/AnsiballZ_command.py'
Oct 10 10:01:27 compute-1 sudo[197435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:27 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:27 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:01:27 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:27.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:01:27 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:01:27 compute-1 kernel: ganesha.nfsd[196507]: segfault at 50 ip 00007ff4b5ba132e sp 00007ff47affc210 error 4 in libntirpc.so.5.8[7ff4b5b86000+2c000] likely on CPU 1 (core 0, socket 1)
Oct 10 10:01:27 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 10 10:01:27 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[194134]: 10/10/2025 10:01:27 : epoch 68e8d965 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff3f8001d50 fd 38 proxy ignored for local
Oct 10 10:01:27 compute-1 python3.9[197437]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 10:01:27 compute-1 systemd[1]: Started Process Core Dump (PID 197439/UID 0).
Oct 10 10:01:27 compute-1 sudo[197435]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:27 compute-1 ceph-mon[79167]: pgmap v457: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.4 KiB/s wr, 4 op/s
Oct 10 10:01:28 compute-1 sudo[197593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkwyuqxceqcezjlutioozydsdchulhff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090487.71088-4092-38710485857711/AnsiballZ_file.py'
Oct 10 10:01:28 compute-1 sudo[197593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:28 compute-1 python3.9[197595]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:01:28 compute-1 sudo[197593]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:28 compute-1 systemd-coredump[197442]: Process 194138 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 45:
                                                    #0  0x00007ff4b5ba132e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Oct 10 10:01:28 compute-1 systemd[1]: systemd-coredump@5-197439-0.service: Deactivated successfully.
Oct 10 10:01:28 compute-1 systemd[1]: systemd-coredump@5-197439-0.service: Consumed 1.176s CPU time.
Oct 10 10:01:28 compute-1 podman[197699]: 2025-10-10 10:01:28.79772218 +0000 UTC m=+0.047536326 container died e9ca41b00a4508e80f57e571972a3f5c37c766e07b204ef1aea54156f1cf77a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:01:28 compute-1 systemd[1]: var-lib-containers-storage-overlay-035d9430a00654dc757673aa7252531a3689382a674128a5bc47736d6c3eeaae-merged.mount: Deactivated successfully.
Oct 10 10:01:28 compute-1 podman[197699]: 2025-10-10 10:01:28.850098236 +0000 UTC m=+0.099912322 container remove e9ca41b00a4508e80f57e571972a3f5c37c766e07b204ef1aea54156f1cf77a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:01:28 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Main process exited, code=exited, status=139/n/a
Oct 10 10:01:28 compute-1 sudo[197766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zalaizdzcklfafzsavjyeaqbsvgckjnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090488.548715-4116-232742658046124/AnsiballZ_stat.py'
Oct 10 10:01:28 compute-1 sudo[197766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:28 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:28 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:28 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:28.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:29 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Failed with result 'exit-code'.
Oct 10 10:01:29 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 1.738s CPU time.
Oct 10 10:01:29 compute-1 python3.9[197777]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:01:29 compute-1 sudo[197766]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:29 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:29 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:29 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:29.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:29 compute-1 sudo[197916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrypedeoqpijeswadektqdupaybngtlw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090488.548715-4116-232742658046124/AnsiballZ_copy.py'
Oct 10 10:01:29 compute-1 sudo[197916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:29 compute-1 python3.9[197918]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760090488.548715-4116-232742658046124/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:01:29 compute-1 sudo[197916]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:29 compute-1 ceph-mon[79167]: pgmap v458: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 4.2 KiB/s rd, 1.7 KiB/s wr, 6 op/s
Oct 10 10:01:30 compute-1 sudo[198068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsyqvgpxrdhedtyzjtgbvziigyqeddhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090490.0707424-4161-70759474067544/AnsiballZ_stat.py'
Oct 10 10:01:30 compute-1 sudo[198068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:30 compute-1 python3.9[198070]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:01:30 compute-1 sudo[198068]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:30 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:30 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:30 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:30.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:31 compute-1 sudo[198191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqtkyhoxoqmckxhwpnndbjbdkeyetpxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090490.0707424-4161-70759474067544/AnsiballZ_copy.py'
Oct 10 10:01:31 compute-1 sudo[198191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100131 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 10:01:31 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:31 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:31 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:31.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:31 compute-1 python3.9[198193]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760090490.0707424-4161-70759474067544/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:01:31 compute-1 sudo[198191]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:31 compute-1 ceph-mon[79167]: pgmap v459: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 10:01:31 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:01:31 compute-1 sudo[198344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osoqqhyzrvcobumiociapirkonsboqae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090491.5710294-4206-122432080071250/AnsiballZ_stat.py'
Oct 10 10:01:31 compute-1 sudo[198344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:32 compute-1 python3.9[198346]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:01:32 compute-1 sudo[198344]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:32 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:01:32 compute-1 sudo[198467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxspbglbhwdqiwiagmtehqnezvocrqzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090491.5710294-4206-122432080071250/AnsiballZ_copy.py'
Oct 10 10:01:32 compute-1 sudo[198467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:32 compute-1 python3.9[198469]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760090491.5710294-4206-122432080071250/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:01:32 compute-1 sudo[198467]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:32 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:32 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:32 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:32.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:33 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:33 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:33 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:33.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100133 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 10:01:33 compute-1 sudo[198620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmudmajmdnngjpdndighpudmnkragqck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090493.1249082-4251-2722837986808/AnsiballZ_systemd.py'
Oct 10 10:01:33 compute-1 sudo[198620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:33 compute-1 python3.9[198622]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 10:01:33 compute-1 systemd[1]: Reloading.
Oct 10 10:01:33 compute-1 ceph-mon[79167]: pgmap v460: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 10:01:33 compute-1 systemd-rc-local-generator[198649]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:01:33 compute-1 systemd-sysv-generator[198654]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:01:34 compute-1 systemd[1]: Reached target edpm_libvirt.target.
Oct 10 10:01:34 compute-1 sudo[198620]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:34 compute-1 sudo[198659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:01:34 compute-1 sudo[198659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:01:34 compute-1 sudo[198659]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:34 compute-1 sudo[198835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyfravqzauxnmspoylgssjoaprepobgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090494.4790103-4276-124766774333046/AnsiballZ_systemd.py'
Oct 10 10:01:34 compute-1 sudo[198835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:34 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:34 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:34 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:34.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:35 compute-1 python3.9[198837]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 10 10:01:35 compute-1 systemd[1]: Reloading.
Oct 10 10:01:35 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:35 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:35 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:35.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:35 compute-1 systemd-rc-local-generator[198860]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:01:35 compute-1 systemd-sysv-generator[198867]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:01:35 compute-1 systemd[1]: Reloading.
Oct 10 10:01:35 compute-1 systemd-rc-local-generator[198898]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:01:35 compute-1 systemd-sysv-generator[198905]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:01:35 compute-1 sudo[198835]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:35 compute-1 ceph-mon[79167]: pgmap v461: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Oct 10 10:01:36 compute-1 sshd-session[141285]: Connection closed by 192.168.122.30 port 46468
Oct 10 10:01:36 compute-1 sshd-session[141282]: pam_unix(sshd:session): session closed for user zuul
Oct 10 10:01:36 compute-1 systemd[1]: session-53.scope: Deactivated successfully.
Oct 10 10:01:36 compute-1 systemd[1]: session-53.scope: Consumed 4min 3.026s CPU time.
Oct 10 10:01:36 compute-1 systemd-logind[789]: Session 53 logged out. Waiting for processes to exit.
Oct 10 10:01:36 compute-1 systemd-logind[789]: Removed session 53.
Oct 10 10:01:36 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:36 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:36 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:36.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:37 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:37 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:01:37 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:37.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:01:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:01:37 compute-1 ceph-mon[79167]: pgmap v462: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Oct 10 10:01:38 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:38 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:01:38 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:38.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:01:39 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Scheduled restart job, restart counter is at 6.
Oct 10 10:01:39 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 10:01:39 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 1.738s CPU time.
Oct 10 10:01:39 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 10:01:39 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:39 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:01:39 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:39.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:01:39 compute-1 podman[198983]: 2025-10-10 10:01:39.49609815 +0000 UTC m=+0.064159992 container create 1e9266627ee094fc385673da2dcf1e0f64598dd63eee5bac13f6fa050a12d1c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:01:39 compute-1 podman[198983]: 2025-10-10 10:01:39.463543887 +0000 UTC m=+0.031605799 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:01:39 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a89cc6d865d4c898d4960d03b8aec883d9ca3c44abcb0effad26fb1cfe27a78b/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 10 10:01:39 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a89cc6d865d4c898d4960d03b8aec883d9ca3c44abcb0effad26fb1cfe27a78b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:01:39 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a89cc6d865d4c898d4960d03b8aec883d9ca3c44abcb0effad26fb1cfe27a78b/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:01:39 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a89cc6d865d4c898d4960d03b8aec883d9ca3c44abcb0effad26fb1cfe27a78b/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.mssvzx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:01:39 compute-1 podman[198983]: 2025-10-10 10:01:39.583980968 +0000 UTC m=+0.152042830 container init 1e9266627ee094fc385673da2dcf1e0f64598dd63eee5bac13f6fa050a12d1c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 10 10:01:39 compute-1 podman[198983]: 2025-10-10 10:01:39.594047167 +0000 UTC m=+0.162108989 container start 1e9266627ee094fc385673da2dcf1e0f64598dd63eee5bac13f6fa050a12d1c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Oct 10 10:01:39 compute-1 bash[198983]: 1e9266627ee094fc385673da2dcf1e0f64598dd63eee5bac13f6fa050a12d1c9
Oct 10 10:01:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:39 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 10 10:01:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:39 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 10 10:01:39 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 10:01:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:39 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 10 10:01:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:39 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 10 10:01:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:39 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 10 10:01:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:39 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 10 10:01:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:39 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 10 10:01:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:39 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:01:39 compute-1 ceph-mon[79167]: pgmap v463: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Oct 10 10:01:40 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:40 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:01:40 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:40.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:01:41 compute-1 podman[199041]: 2025-10-10 10:01:41.047768084 +0000 UTC m=+0.144227730 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 10 10:01:41 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:41 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:41 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:41.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:41 compute-1 sshd-session[199069]: Accepted publickey for zuul from 192.168.122.30 port 45246 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 10:01:41 compute-1 systemd-logind[789]: New session 54 of user zuul.
Oct 10 10:01:41 compute-1 systemd[1]: Started Session 54 of User zuul.
Oct 10 10:01:41 compute-1 sshd-session[199069]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 10:01:41 compute-1 ceph-mon[79167]: pgmap v464: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Oct 10 10:01:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:01:42.194 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:01:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:01:42.194 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:01:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:01:42.194 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:01:42 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:01:42 compute-1 python3.9[199222]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 10:01:42 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:42 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:01:42 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:42.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:01:43 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:43 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:43 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:43.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:43 compute-1 ceph-mon[79167]: pgmap v465: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:01:44 compute-1 sudo[199377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjfiwlkrrjawdnlyfebeowrtuwojwtfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090503.446076-63-43370055372672/AnsiballZ_file.py'
Oct 10 10:01:44 compute-1 sudo[199377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:44 compute-1 python3.9[199379]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:01:44 compute-1 sudo[199377]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:44 compute-1 sudo[199539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpjlgfzwqxdzkvecmcwlejpompvhulth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090504.4548216-63-142031317364042/AnsiballZ_file.py'
Oct 10 10:01:44 compute-1 sudo[199539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:44 compute-1 podman[199503]: 2025-10-10 10:01:44.844354949 +0000 UTC m=+0.085673449 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:01:44 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:44 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:01:44 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:44.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:01:45 compute-1 python3.9[199547]: ansible-ansible.builtin.file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:01:45 compute-1 sudo[199539]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:45 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:45 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:45 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:45.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:45 compute-1 sudo[199699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buckpxwuflnmkssgftgjaorwfalxenxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090505.223-63-246861733770457/AnsiballZ_file.py'
Oct 10 10:01:45 compute-1 sudo[199699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:45 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:01:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:45 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:01:45 compute-1 python3.9[199701]: ansible-ansible.builtin.file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:01:45 compute-1 sudo[199699]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:45 compute-1 ceph-mon[79167]: pgmap v466: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:01:46 compute-1 sudo[199851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aikzrppjkyadfwucmorxzgxlbtooesba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090505.986861-63-159089652741605/AnsiballZ_file.py'
Oct 10 10:01:46 compute-1 sudo[199851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:46 compute-1 python3.9[199853]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 10 10:01:46 compute-1 sudo[199851]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:46 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:46 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:46 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:46.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:46 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:01:47 compute-1 sudo[200003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mogoernszemtvtbrbxdsymrvramrlqao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090506.7192957-63-20789146978281/AnsiballZ_file.py'
Oct 10 10:01:47 compute-1 sudo[200003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:47 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:47 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:47 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:47.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:01:47 compute-1 python3.9[200005]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data/ansible-generated/iscsid setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:01:47 compute-1 sudo[200003]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:47 compute-1 ceph-mon[79167]: pgmap v467: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:01:48 compute-1 sudo[200156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spztcdrsuhockxcijxzzbpdxtjiccxph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090507.5639212-171-175203275061890/AnsiballZ_stat.py'
Oct 10 10:01:48 compute-1 sudo[200156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:48 compute-1 python3.9[200158]: ansible-ansible.builtin.stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 10:01:48 compute-1 sudo[200156]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:48 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:48 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:48 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:48.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:49 compute-1 sudo[200310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvcjwdlpmdfbspxinlujxzhhvrhsvjtq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090508.524611-195-53416978145340/AnsiballZ_systemd.py'
Oct 10 10:01:49 compute-1 sudo[200310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:49 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:49 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:01:49 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:49.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:01:49 compute-1 python3.9[200312]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsid.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 10:01:49 compute-1 systemd[1]: Reloading.
Oct 10 10:01:49 compute-1 systemd-rc-local-generator[200344]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:01:49 compute-1 systemd-sysv-generator[200347]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:01:49 compute-1 sudo[200310]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:50 compute-1 ceph-mon[79167]: pgmap v468: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:01:50 compute-1 sudo[200501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oencuokhmxwbyovharyzglwhveanfszz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090510.1695027-219-243605605499691/AnsiballZ_service_facts.py'
Oct 10 10:01:50 compute-1 sudo[200501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:50 compute-1 python3.9[200503]: ansible-ansible.builtin.service_facts Invoked
Oct 10 10:01:50 compute-1 network[200520]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 10 10:01:50 compute-1 network[200521]: 'network-scripts' will be removed from distribution in near future.
Oct 10 10:01:50 compute-1 network[200522]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 10 10:01:50 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:50 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:01:50 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:50.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:01:51 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:51 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:01:51 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:51.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:01:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 10:01:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 10 10:01:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 10 10:01:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 10 10:01:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 10 10:01:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 10 10:01:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 10 10:01:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:01:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:01:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:01:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 10 10:01:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:01:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 10 10:01:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 10 10:01:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 10 10:01:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 10 10:01:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 10 10:01:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 10 10:01:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 10 10:01:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 10 10:01:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 10 10:01:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 10 10:01:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 10 10:01:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 10 10:01:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 10:01:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 10 10:01:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:51 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 10:01:52 compute-1 ceph-mon[79167]: pgmap v469: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:01:52 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:52 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8068000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:01:52 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:52 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f805c0014d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:52 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:52 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:52 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:52.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:53 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:53 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:53 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:53.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:53 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8044000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:54 compute-1 ceph-mon[79167]: pgmap v470: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 10:01:54 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:54 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f803c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:54 compute-1 sudo[200612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:01:54 compute-1 sudo[200612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:01:54 compute-1 sudo[200612]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:54 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:54 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8048000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:54 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:54 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:01:54 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:54.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:01:55 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:55 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:55 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:55.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100155 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 10:01:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:55 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f805c0021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:56 compute-1 ceph-mon[79167]: pgmap v471: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:01:56 compute-1 sudo[200501]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:56 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:56 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f80440016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:56 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:56 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f803c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:56 compute-1 sudo[200837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aemlurfvpxxwluolvqttzfxnurlxtfei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090516.4792986-243-271518809426009/AnsiballZ_systemd.py'
Oct 10 10:01:56 compute-1 sudo[200837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:56 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:56 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:56 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:56.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:57 compute-1 python3.9[200839]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsi-starter.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 10:01:57 compute-1 systemd[1]: Reloading.
Oct 10 10:01:57 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:57 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:01:57 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:57.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:01:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:01:57 compute-1 systemd-sysv-generator[200873]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:01:57 compute-1 systemd-rc-local-generator[200869]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:01:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:57 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8048001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:57 compute-1 sudo[200837]: pam_unix(sudo:session): session closed for user root
Oct 10 10:01:58 compute-1 ceph-mon[79167]: pgmap v472: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:01:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:58 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f805c0021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:58 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f80440016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:58 compute-1 python3.9[201027]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 10:01:58 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:58 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:01:58 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:01:58.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:01:59 compute-1 sudo[201177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snjghgxsxmjajmckkmkxpujkfnyhhdld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090518.6600444-294-231595604765895/AnsiballZ_podman_container.py'
Oct 10 10:01:59 compute-1 sudo[201177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:01:59 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:01:59 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:01:59 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:01:59.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:01:59 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:01:59 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f803c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:01:59 compute-1 python3.9[201179]: ansible-containers.podman.podman_container Invoked with command=/usr/sbin/iscsi-iname detach=False image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f name=iscsid_config rm=True tty=True executable=podman state=started debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None 
passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 10 10:02:00 compute-1 ceph-mon[79167]: pgmap v473: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:02:00 compute-1 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 10:02:00 compute-1 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 10:02:00 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:02:00 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8048001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:00 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:02:00 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f805c0021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:00 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:00 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:00 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:00.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:01 compute-1 podman[201194]: 2025-10-10 10:02:01.044461884 +0000 UTC m=+1.452868035 image pull 74877095db294c27659f24e7f86074178a6f28eee68561c30e3ce4d18519e09c quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct 10 10:02:01 compute-1 ceph-mon[79167]: pgmap v474: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:02:01 compute-1 podman[201254]: 2025-10-10 10:02:01.261480082 +0000 UTC m=+0.065781008 container create 4356f2883dd572f63679a65d285b5cdfc4cc9df809fcc3e76292adc7eebb8175 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 10:02:01 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:01 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:01 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:01.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:01 compute-1 NetworkManager[44982]: <info>  [1760090521.2956] manager: (podman0): new Bridge device (/org/freedesktop/NetworkManager/Devices/23)
Oct 10 10:02:01 compute-1 podman[201254]: 2025-10-10 10:02:01.229410751 +0000 UTC m=+0.033711737 image pull 74877095db294c27659f24e7f86074178a6f28eee68561c30e3ce4d18519e09c quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct 10 10:02:01 compute-1 kernel: podman0: port 1(veth0) entered blocking state
Oct 10 10:02:01 compute-1 kernel: podman0: port 1(veth0) entered disabled state
Oct 10 10:02:01 compute-1 kernel: veth0: entered allmulticast mode
Oct 10 10:02:01 compute-1 kernel: veth0: entered promiscuous mode
Oct 10 10:02:01 compute-1 NetworkManager[44982]: <info>  [1760090521.3278] manager: (veth0): new Veth device (/org/freedesktop/NetworkManager/Devices/24)
Oct 10 10:02:01 compute-1 kernel: podman0: port 1(veth0) entered blocking state
Oct 10 10:02:01 compute-1 kernel: podman0: port 1(veth0) entered forwarding state
Oct 10 10:02:01 compute-1 NetworkManager[44982]: <info>  [1760090521.3312] device (veth0): carrier: link connected
Oct 10 10:02:01 compute-1 NetworkManager[44982]: <info>  [1760090521.3317] device (podman0): carrier: link connected
Oct 10 10:02:01 compute-1 systemd-udevd[201285]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 10:02:01 compute-1 systemd-udevd[201282]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 10:02:01 compute-1 NetworkManager[44982]: <info>  [1760090521.3773] device (podman0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 10:02:01 compute-1 NetworkManager[44982]: <info>  [1760090521.3782] device (podman0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 10 10:02:01 compute-1 NetworkManager[44982]: <info>  [1760090521.3794] device (podman0): Activation: starting connection 'podman0' (81076ce4-9a9e-4b26-b84c-99af0a2be891)
Oct 10 10:02:01 compute-1 NetworkManager[44982]: <info>  [1760090521.3795] device (podman0): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 10 10:02:01 compute-1 NetworkManager[44982]: <info>  [1760090521.3806] device (podman0): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 10 10:02:01 compute-1 NetworkManager[44982]: <info>  [1760090521.3808] device (podman0): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 10 10:02:01 compute-1 NetworkManager[44982]: <info>  [1760090521.3811] device (podman0): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 10 10:02:01 compute-1 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 10 10:02:01 compute-1 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 10 10:02:01 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:02:01 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f805c0021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:01 compute-1 NetworkManager[44982]: <info>  [1760090521.4160] device (podman0): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 10 10:02:01 compute-1 NetworkManager[44982]: <info>  [1760090521.4162] device (podman0): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 10 10:02:01 compute-1 NetworkManager[44982]: <info>  [1760090521.4172] device (podman0): Activation: successful, device activated.
Oct 10 10:02:01 compute-1 systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive.
Oct 10 10:02:01 compute-1 systemd[1]: Started libpod-conmon-4356f2883dd572f63679a65d285b5cdfc4cc9df809fcc3e76292adc7eebb8175.scope.
Oct 10 10:02:01 compute-1 systemd[1]: Started libcrun container.
Oct 10 10:02:01 compute-1 podman[201254]: 2025-10-10 10:02:01.677234885 +0000 UTC m=+0.481535811 container init 4356f2883dd572f63679a65d285b5cdfc4cc9df809fcc3e76292adc7eebb8175 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:02:01 compute-1 podman[201254]: 2025-10-10 10:02:01.685852301 +0000 UTC m=+0.490153227 container start 4356f2883dd572f63679a65d285b5cdfc4cc9df809fcc3e76292adc7eebb8175 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:02:01 compute-1 podman[201254]: 2025-10-10 10:02:01.689601884 +0000 UTC m=+0.493902810 container attach 4356f2883dd572f63679a65d285b5cdfc4cc9df809fcc3e76292adc7eebb8175 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 10 10:02:01 compute-1 iscsid_config[201413]: iqn.1994-05.com.redhat:fcb4321b495f
Oct 10 10:02:01 compute-1 systemd[1]: libpod-4356f2883dd572f63679a65d285b5cdfc4cc9df809fcc3e76292adc7eebb8175.scope: Deactivated successfully.
Oct 10 10:02:01 compute-1 podman[201254]: 2025-10-10 10:02:01.694978861 +0000 UTC m=+0.499279777 container died 4356f2883dd572f63679a65d285b5cdfc4cc9df809fcc3e76292adc7eebb8175 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 10 10:02:01 compute-1 kernel: podman0: port 1(veth0) entered disabled state
Oct 10 10:02:01 compute-1 kernel: veth0 (unregistering): left allmulticast mode
Oct 10 10:02:01 compute-1 kernel: veth0 (unregistering): left promiscuous mode
Oct 10 10:02:01 compute-1 kernel: podman0: port 1(veth0) entered disabled state
Oct 10 10:02:01 compute-1 NetworkManager[44982]: <info>  [1760090521.7746] device (podman0): state change: activated -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 10 10:02:02 compute-1 systemd[1]: run-netns-netns\x2d342413df\x2de8a7\x2d9394\x2d7d06\x2d1c893eba8cfb.mount: Deactivated successfully.
Oct 10 10:02:02 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4356f2883dd572f63679a65d285b5cdfc4cc9df809fcc3e76292adc7eebb8175-userdata-shm.mount: Deactivated successfully.
Oct 10 10:02:02 compute-1 systemd[1]: var-lib-containers-storage-overlay-589e2b0bc08fc8cef88f6788646d4297ff98dfa271e5e29c49a9e276f23372ba-merged.mount: Deactivated successfully.
Oct 10 10:02:02 compute-1 podman[201254]: 2025-10-10 10:02:02.141748967 +0000 UTC m=+0.946049903 container remove 4356f2883dd572f63679a65d285b5cdfc4cc9df809fcc3e76292adc7eebb8175 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 10 10:02:02 compute-1 systemd[1]: libpod-conmon-4356f2883dd572f63679a65d285b5cdfc4cc9df809fcc3e76292adc7eebb8175.scope: Deactivated successfully.
Oct 10 10:02:02 compute-1 python3.9[201179]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman run --name iscsid_config --detach=False --rm --tty=True quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f /usr/sbin/iscsi-iname
Oct 10 10:02:02 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:02:02 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:02:02 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f803c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:02 compute-1 python3.9[201179]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: Error generating systemd: 
                                             DEPRECATED command:
                                             It is recommended to use Quadlets for running containers and pods under systemd.
                                             
                                             Please refer to podman-systemd.unit(5) for details.
                                             Error: iscsid_config does not refer to a container or pod: no pod with name or ID iscsid_config found: no such pod: no container with name or ID "iscsid_config" found: no such container
Oct 10 10:02:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:02:02 compute-1 sudo[201177]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:02 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:02:02 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8048001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:02 compute-1 sudo[201649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsxubymandpwudgblhzwprkwicquqioc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090522.5788796-318-232663492197741/AnsiballZ_stat.py'
Oct 10 10:02:02 compute-1 sudo[201649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:02 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:02 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:02:02 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:02.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:02:03 compute-1 python3.9[201651]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:02:03 compute-1 sudo[201649]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:03 compute-1 ceph-mon[79167]: pgmap v475: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:02:03 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:03 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:03 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:03.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:02:03 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8044002720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:03 compute-1 sudo[201773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhmbdzxkwixneqwdejwicoarhuzivvja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090522.5788796-318-232663492197741/AnsiballZ_copy.py'
Oct 10 10:02:03 compute-1 sudo[201773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:04 compute-1 python3.9[201775]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760090522.5788796-318-232663492197741/.source.iscsi _original_basename=.1e4w866_ follow=False checksum=538f0a6547d2d2c444bf7bb1ebe39f3e6e4f45dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:02:04 compute-1 sudo[201773]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:04 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:02:04 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f805c0021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:04 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:02:04 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f803c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:04 compute-1 sudo[201925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clvthbaxjewtdpxhgsmtozbbmprnymsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090524.2403064-363-171654541779447/AnsiballZ_file.py'
Oct 10 10:02:04 compute-1 sudo[201925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:04 compute-1 python3.9[201927]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:02:04 compute-1 sudo[201925]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:04 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:04 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 10:02:04 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:04.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 10:02:05 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:05 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:02:05 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:05.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:02:05 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:02:05 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8048002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:05 compute-1 python3.9[202078]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/iscsid.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 10:02:05 compute-1 ceph-mon[79167]: pgmap v476: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:02:06 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:02:06 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8044002720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:06 compute-1 sudo[202230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmvdduscyhtxistzapyzbrjfgrryztxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090525.8933184-414-152594890451786/AnsiballZ_lineinfile.py'
Oct 10 10:02:06 compute-1 sudo[202230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:06 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:02:06 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f805c0021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:06 compute-1 python3.9[202232]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:02:06 compute-1 sudo[202230]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:07 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:07 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:07 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:06.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:07 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:07 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:07 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:07.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:02:07 compute-1 sudo[202383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dajcabmptqjgprvpvnqgpjrbshcueurt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090527.019494-441-76175247004395/AnsiballZ_file.py'
Oct 10 10:02:07 compute-1 sudo[202383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:02:07 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f803c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:07 compute-1 python3.9[202385]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:02:07 compute-1 sudo[202383]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:07 compute-1 ceph-mon[79167]: pgmap v477: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:02:08 compute-1 sudo[202535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdtgabtjhctsdkxsljxbtvnwvddqujyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090527.8153305-465-143164552049788/AnsiballZ_stat.py'
Oct 10 10:02:08 compute-1 sudo[202535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:02:08 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8048002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:08 compute-1 python3.9[202537]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:02:08 compute-1 sudo[202535]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:02:08 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8044002720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:08 compute-1 sudo[202613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irmjclvclnjnypptezlajqzchcxdvfxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090527.8153305-465-143164552049788/AnsiballZ_file.py'
Oct 10 10:02:08 compute-1 sudo[202613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:08 compute-1 python3.9[202615]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:02:08 compute-1 sudo[202613]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:09 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:09 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:09 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:09.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:09 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:09 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:09 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:09.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:09 compute-1 kernel: ganesha.nfsd[200532]: segfault at 50 ip 00007f811816032e sp 00007f80e67fb210 error 4 in libntirpc.so.5.8[7f8118145000+2c000] likely on CPU 7 (core 0, socket 7)
Oct 10 10:02:09 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 10 10:02:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[198999]: 10/10/2025 10:02:09 : epoch 68e8d983 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f805c0021d0 fd 39 proxy ignored for local
Oct 10 10:02:09 compute-1 systemd[1]: Started Process Core Dump (PID 202740/UID 0).
Oct 10 10:02:09 compute-1 sudo[202768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihjgjcjrlolezrluizlpknxdohtzxwci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090529.152783-465-77230385579379/AnsiballZ_stat.py'
Oct 10 10:02:09 compute-1 sudo[202768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:09 compute-1 ceph-mon[79167]: pgmap v478: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:02:09 compute-1 python3.9[202770]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:02:09 compute-1 sudo[202768]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:10 compute-1 sudo[202846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkxzvdtbcisxzyjxukkdlsbhuqvmoweb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090529.152783-465-77230385579379/AnsiballZ_file.py'
Oct 10 10:02:10 compute-1 sudo[202846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:10 compute-1 python3.9[202848]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:02:10 compute-1 sudo[202846]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:10 compute-1 systemd-coredump[202743]: Process 199003 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 43:
                                                    #0  0x00007f811816032e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Oct 10 10:02:10 compute-1 systemd[1]: systemd-coredump@6-202740-0.service: Deactivated successfully.
Oct 10 10:02:10 compute-1 systemd[1]: systemd-coredump@6-202740-0.service: Consumed 1.166s CPU time.
Oct 10 10:02:10 compute-1 podman[202964]: 2025-10-10 10:02:10.753503857 +0000 UTC m=+0.045442118 container died 1e9266627ee094fc385673da2dcf1e0f64598dd63eee5bac13f6fa050a12d1c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 10 10:02:10 compute-1 systemd[1]: var-lib-containers-storage-overlay-a89cc6d865d4c898d4960d03b8aec883d9ca3c44abcb0effad26fb1cfe27a78b-merged.mount: Deactivated successfully.
Oct 10 10:02:10 compute-1 sudo[203017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-keyysxyiexsmlagyrojyzjhpetehecxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090530.496826-534-176851370664346/AnsiballZ_file.py'
Oct 10 10:02:10 compute-1 sudo[203017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:10 compute-1 podman[202964]: 2025-10-10 10:02:10.822006167 +0000 UTC m=+0.113944428 container remove 1e9266627ee094fc385673da2dcf1e0f64598dd63eee5bac13f6fa050a12d1c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 10 10:02:10 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Main process exited, code=exited, status=139/n/a
Oct 10 10:02:11 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:11 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:11 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:11.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:11 compute-1 python3.9[203019]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:02:11 compute-1 sudo[203017]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:11 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Failed with result 'exit-code'.
Oct 10 10:02:11 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 1.840s CPU time.
Oct 10 10:02:11 compute-1 podman[203050]: 2025-10-10 10:02:11.201143256 +0000 UTC m=+0.110479015 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 10 10:02:11 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:11 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:11 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:11.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:11 compute-1 sudo[203225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqunephqerbnxrgtawvdgyahzmcohjxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090531.2889543-558-179283008727285/AnsiballZ_stat.py'
Oct 10 10:02:11 compute-1 sudo[203225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:11 compute-1 ceph-mon[79167]: pgmap v479: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:02:11 compute-1 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 10 10:02:11 compute-1 python3.9[203227]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:02:11 compute-1 sudo[203225]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:12 compute-1 sudo[203303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfyvawvqqaxlqngxtdzvmvogeoukrwgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090531.2889543-558-179283008727285/AnsiballZ_file.py'
Oct 10 10:02:12 compute-1 sudo[203303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:02:12 compute-1 python3.9[203305]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:02:12 compute-1 sudo[203303]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:13 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:13 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:02:13 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:13.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:02:13 compute-1 sudo[203455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxmrouodhngnjtwuaeafderapxhgnjkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090532.7310812-594-243792130096922/AnsiballZ_stat.py'
Oct 10 10:02:13 compute-1 sudo[203455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:13 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:13 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:13 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:13.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:13 compute-1 python3.9[203457]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:02:13 compute-1 sudo[203455]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:13 compute-1 sudo[203534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhuzwxhpgfuvcveogstsulmkplfvjfxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090532.7310812-594-243792130096922/AnsiballZ_file.py'
Oct 10 10:02:13 compute-1 sudo[203534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:13 compute-1 ceph-mon[79167]: pgmap v480: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 10:02:13 compute-1 python3.9[203536]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:02:13 compute-1 sudo[203534]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:14 compute-1 sudo[203660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:02:14 compute-1 sudo[203660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:02:14 compute-1 sudo[203660]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:14 compute-1 sudo[203709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbtgsmyozuwjnypzcpzpdgrkqqfxkyyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090534.091295-630-55141139264875/AnsiballZ_systemd.py'
Oct 10 10:02:14 compute-1 sudo[203709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:14 compute-1 python3.9[203713]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 10:02:14 compute-1 systemd[1]: Reloading.
Oct 10 10:02:14 compute-1 systemd-sysv-generator[203745]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:02:14 compute-1 systemd-rc-local-generator[203741]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:02:15 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:15 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:15 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:15.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:15 compute-1 sudo[203709]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:15 compute-1 podman[203750]: 2025-10-10 10:02:15.274643081 +0000 UTC m=+0.074392683 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 10 10:02:15 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:15 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:15 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:15.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100215 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 10:02:15 compute-1 ceph-mon[79167]: pgmap v481: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:02:15 compute-1 sudo[203920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqfiahtblhxgnilmelprvddngfxqlyqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090535.4958906-655-142535038022839/AnsiballZ_stat.py'
Oct 10 10:02:15 compute-1 sudo[203920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:16 compute-1 python3.9[203922]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:02:16 compute-1 sudo[203920]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:16 compute-1 sudo[203998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uonuxgrxwicmnmmtkcatveihjxkbtkyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090535.4958906-655-142535038022839/AnsiballZ_file.py'
Oct 10 10:02:16 compute-1 sudo[203998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:16 compute-1 python3.9[204000]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:02:16 compute-1 sudo[203998]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:02:17 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:17 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:02:17 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:17.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:02:17 compute-1 sudo[204150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtvgswfsmtqqtbdcatddcoaifmzwqdfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090536.857773-690-230580030722497/AnsiballZ_stat.py'
Oct 10 10:02:17 compute-1 sudo[204150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:17 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:17 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:17 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:17.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:02:17 compute-1 python3.9[204152]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:02:17 compute-1 sudo[204150]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:17 compute-1 sudo[204229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsjbqfojgcpcxroyuramyuwiskvzummu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090536.857773-690-230580030722497/AnsiballZ_file.py'
Oct 10 10:02:17 compute-1 sudo[204229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:17 compute-1 ceph-mon[79167]: pgmap v482: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:02:17 compute-1 python3.9[204231]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:02:17 compute-1 sudo[204229]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:18 compute-1 sudo[204381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khlehpsmuhtqegmbuhforuppmldizwcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090538.2054503-726-38264690841141/AnsiballZ_systemd.py'
Oct 10 10:02:18 compute-1 sudo[204381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:18 compute-1 python3.9[204383]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 10:02:18 compute-1 systemd[1]: Reloading.
Oct 10 10:02:18 compute-1 systemd-rc-local-generator[204406]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:02:18 compute-1 systemd-sysv-generator[204409]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:02:19 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:19 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:02:19 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:19.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:02:19 compute-1 sudo[204419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:02:19 compute-1 sudo[204419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:02:19 compute-1 sudo[204419]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:19 compute-1 systemd[1]: Starting Create netns directory...
Oct 10 10:02:19 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 10 10:02:19 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 10 10:02:19 compute-1 systemd[1]: Finished Create netns directory.
Oct 10 10:02:19 compute-1 sudo[204446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:02:19 compute-1 sudo[204446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:02:19 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:19 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:19 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:19.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:19 compute-1 sudo[204381]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:19 compute-1 ceph-mon[79167]: pgmap v483: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:02:19 compute-1 sudo[204446]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:20 compute-1 sudo[204656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixtbglsebugoyjirdjwhpbrujbngsajm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090539.6951766-756-64553463168231/AnsiballZ_file.py'
Oct 10 10:02:20 compute-1 sudo[204656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:20 compute-1 python3.9[204658]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:02:20 compute-1 sudo[204656]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:20 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:02:20 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:02:20 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:02:20 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:02:20 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:02:20 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:02:20 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:02:20 compute-1 sudo[204808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shqcnlephmrsdzmnrxwbytifzeuxupbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090540.5120976-781-139326340207066/AnsiballZ_stat.py'
Oct 10 10:02:20 compute-1 sudo[204808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:21 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:21 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:21 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:21.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:21 compute-1 python3.9[204810]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/iscsid/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:02:21 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Scheduled restart job, restart counter is at 7.
Oct 10 10:02:21 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 10:02:21 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 1.840s CPU time.
Oct 10 10:02:21 compute-1 sudo[204808]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:21 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 10:02:21 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:21 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:21 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:21.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:21 compute-1 podman[204932]: 2025-10-10 10:02:21.486094388 +0000 UTC m=+0.071786251 container create 4717a0fb642c4e89fc048e37c1ba16280d16bc6e7bc4d2c9d3fc1b766a49154e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 10 10:02:21 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c38e0c120a6bc62a555c2bf449eb1333a41471a8f274a6f9e21cf4b9732a074c/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 10 10:02:21 compute-1 podman[204932]: 2025-10-10 10:02:21.455008955 +0000 UTC m=+0.040700828 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:02:21 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c38e0c120a6bc62a555c2bf449eb1333a41471a8f274a6f9e21cf4b9732a074c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:02:21 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c38e0c120a6bc62a555c2bf449eb1333a41471a8f274a6f9e21cf4b9732a074c/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:02:21 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c38e0c120a6bc62a555c2bf449eb1333a41471a8f274a6f9e21cf4b9732a074c/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.mssvzx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:02:21 compute-1 sudo[204998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oroduqimrubpgixchmnbworvyymnenuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090540.5120976-781-139326340207066/AnsiballZ_copy.py'
Oct 10 10:02:21 compute-1 sudo[204998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:21 compute-1 podman[204932]: 2025-10-10 10:02:21.568290805 +0000 UTC m=+0.153982668 container init 4717a0fb642c4e89fc048e37c1ba16280d16bc6e7bc4d2c9d3fc1b766a49154e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0)
Oct 10 10:02:21 compute-1 podman[204932]: 2025-10-10 10:02:21.580401797 +0000 UTC m=+0.166093630 container start 4717a0fb642c4e89fc048e37c1ba16280d16bc6e7bc4d2c9d3fc1b766a49154e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 10 10:02:21 compute-1 bash[204932]: 4717a0fb642c4e89fc048e37c1ba16280d16bc6e7bc4d2c9d3fc1b766a49154e
Oct 10 10:02:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:21 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 10 10:02:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:21 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 10 10:02:21 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 10:02:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:21 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 10 10:02:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:21 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 10 10:02:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:21 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 10 10:02:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:21 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 10 10:02:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:21 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 10 10:02:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:21 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:02:21 compute-1 python3.9[205001]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/iscsid/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760090540.5120976-781-139326340207066/.source _original_basename=healthcheck follow=False checksum=2e1237e7fe015c809b173c52e24cfb87132f4344 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:02:21 compute-1 sudo[204998]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:22 compute-1 ceph-mon[79167]: pgmap v484: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:02:22 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:02:22 compute-1 sudo[205190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osnzuhbwgywxeddgnrggghrajgapgwjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090542.4998221-832-60171446892762/AnsiballZ_file.py'
Oct 10 10:02:22 compute-1 sudo[205190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:23 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:23 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:02:23 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:23.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:02:23 compute-1 python3.9[205192]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:02:23 compute-1 sudo[205190]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:23 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:23 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:02:23 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:23.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:02:23 compute-1 sudo[205343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uelhqadzaoixqgovztsqljqpsimeguic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090543.3185034-855-167800353977167/AnsiballZ_stat.py'
Oct 10 10:02:23 compute-1 sudo[205343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:23 compute-1 python3.9[205345]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/iscsid.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:02:23 compute-1 sudo[205343]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:24 compute-1 ceph-mon[79167]: pgmap v485: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:02:24 compute-1 sudo[205466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfnxwyxxmeomxpnwhwzsawwjekpwyzgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090543.3185034-855-167800353977167/AnsiballZ_copy.py'
Oct 10 10:02:24 compute-1 sudo[205466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:24 compute-1 python3.9[205468]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/iscsid.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760090543.3185034-855-167800353977167/.source.json _original_basename=.9yz8mjl1 follow=False checksum=80e4f97460718c7e5c66b21ef8b846eba0e0dbc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:02:24 compute-1 sudo[205466]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:25 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:25 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:02:25 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:25.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:02:25 compute-1 sudo[205618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yllrawmlblgxjlrcxiijrzwypsluhobb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090544.8016713-900-46365767748710/AnsiballZ_file.py'
Oct 10 10:02:25 compute-1 sudo[205618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:25 compute-1 sudo[205621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:02:25 compute-1 sudo[205621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:02:25 compute-1 sudo[205621]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:25 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:25 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:02:25 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:25.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:02:25 compute-1 python3.9[205620]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/iscsid state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:02:25 compute-1 sudo[205618]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:26 compute-1 ceph-mon[79167]: pgmap v486: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:02:26 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:02:26 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:02:26 compute-1 sudo[205796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upxmnkzkrvihmnpuzcebgttcpmxgumsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090545.7929938-924-146519451676682/AnsiballZ_stat.py'
Oct 10 10:02:26 compute-1 sudo[205796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:26 compute-1 sudo[205796]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:26 compute-1 sudo[205919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxtmvavcsarcwdoegjdjmrndmfhxscra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090545.7929938-924-146519451676682/AnsiballZ_copy.py'
Oct 10 10:02:26 compute-1 sudo[205919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:26 compute-1 sudo[205919]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:27 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:27 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:02:27 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:27.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:02:27 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:27 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:27 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:27.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:27 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:02:27 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:27 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:02:27 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:27 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:02:27 compute-1 sudo[206072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlmwswvzratzifemjmztonqoxvekgsrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090547.3578126-975-279607787220793/AnsiballZ_container_config_data.py'
Oct 10 10:02:27 compute-1 sudo[206072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:28 compute-1 ceph-mon[79167]: pgmap v487: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:02:28 compute-1 python3.9[206074]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/iscsid config_pattern=*.json debug=False
Oct 10 10:02:28 compute-1 sudo[206072]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:28 compute-1 sudo[206224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iorkaddyxmfhnitoxpvaorssxltgstza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090548.438442-1002-9712676100331/AnsiballZ_container_config_hash.py'
Oct 10 10:02:28 compute-1 sudo[206224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:29 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:29 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:29 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:29.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:29 compute-1 python3.9[206226]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 10 10:02:29 compute-1 sudo[206224]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:29 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:29 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:29 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:29.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:30 compute-1 ceph-mon[79167]: pgmap v488: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 596 B/s wr, 2 op/s
Oct 10 10:02:30 compute-1 sudo[206377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgrphrdaaesqplrfxsaipctvylbsdpmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090549.6284125-1029-6049610855252/AnsiballZ_podman_container_info.py'
Oct 10 10:02:30 compute-1 sudo[206377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:30 compute-1 python3.9[206379]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 10 10:02:30 compute-1 sudo[206377]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:31 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:31 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:31 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:31.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:31 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:31 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:31 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:31.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:32 compute-1 ceph-mon[79167]: pgmap v489: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 596 B/s wr, 2 op/s
Oct 10 10:02:32 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:02:32 compute-1 sudo[206556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkfucdjzghkwqbfyfnjzretokxlqrzvl ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760090551.525817-1068-43409929506658/AnsiballZ_edpm_container_manage.py'
Oct 10 10:02:32 compute-1 sudo[206556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:32 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:02:32 compute-1 python3[206558]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/iscsid config_id=iscsid config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 10 10:02:32 compute-1 podman[206594]: 2025-10-10 10:02:32.654488614 +0000 UTC m=+0.057121219 container create 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:02:32 compute-1 podman[206594]: 2025-10-10 10:02:32.622890607 +0000 UTC m=+0.025523282 image pull 74877095db294c27659f24e7f86074178a6f28eee68561c30e3ce4d18519e09c quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct 10 10:02:32 compute-1 python3[206558]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name iscsid --conmon-pidfile /run/iscsid.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=iscsid --label container_name=iscsid --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:z --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/openstack/healthchecks/iscsid:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct 10 10:02:32 compute-1 sudo[206556]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:33 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:33 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:02:33 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:33.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:02:33 compute-1 ceph-mon[79167]: pgmap v490: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 10:02:33 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:33 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:33 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:33.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:33 compute-1 sudo[206783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoaxygiksxoxmadndpieelvtqaerajjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090553.2498481-1092-173698484388286/AnsiballZ_stat.py'
Oct 10 10:02:33 compute-1 sudo[206783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 10:02:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 10 10:02:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 10 10:02:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 10 10:02:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 10 10:02:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 10 10:02:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 10 10:02:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:02:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:02:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:02:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 10 10:02:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:02:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 10 10:02:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 10 10:02:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 10 10:02:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 10 10:02:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 10 10:02:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 10 10:02:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 10 10:02:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 10 10:02:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 10 10:02:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 10 10:02:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 10 10:02:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 10 10:02:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 10:02:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 10 10:02:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:33 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 10:02:33 compute-1 python3.9[206785]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 10:02:33 compute-1 sudo[206783]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:34 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:34 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f70b0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:34 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:34 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f70a4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:34 compute-1 sudo[206954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgkxzunkfnfhncxxsklzclmcdyiomlzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090554.1691961-1119-188683155628482/AnsiballZ_file.py'
Oct 10 10:02:34 compute-1 sudo[206954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:34 compute-1 sudo[206953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:02:34 compute-1 sudo[206953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:02:34 compute-1 sudo[206953]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:34 compute-1 python3.9[206968]: ansible-file Invoked with path=/etc/systemd/system/edpm_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:02:34 compute-1 sudo[206954]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:35 compute-1 sudo[207054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxdvhvqtdtukjubliojbsyrofijemvhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090554.1691961-1119-188683155628482/AnsiballZ_stat.py'
Oct 10 10:02:35 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:35 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:35 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:35.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:35 compute-1 sudo[207054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:35 compute-1 python3.9[207056]: ansible-stat Invoked with path=/etc/systemd/system/edpm_iscsid_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 10:02:35 compute-1 sudo[207054]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:35 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:35 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:35 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:35.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:35 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:35 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f708c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:35 compute-1 ceph-mon[79167]: pgmap v491: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:02:35 compute-1 sudo[207206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utbjbhxmoqkvnohlnthjdqtyckljdafr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090555.310925-1119-214567832608445/AnsiballZ_copy.py'
Oct 10 10:02:35 compute-1 sudo[207206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:35 compute-1 python3.9[207208]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760090555.310925-1119-214567832608445/source dest=/etc/systemd/system/edpm_iscsid.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:02:35 compute-1 sudo[207206]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:36 compute-1 sudo[207282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfzngzrxrgvapfyaqxtvqxguchftkvjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090555.310925-1119-214567832608445/AnsiballZ_systemd.py'
Oct 10 10:02:36 compute-1 sudo[207282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:36 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:36 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7084000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:36 compute-1 python3.9[207284]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 10 10:02:36 compute-1 systemd[1]: Reloading.
Oct 10 10:02:36 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:36 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f709c001230 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:36 compute-1 systemd-rc-local-generator[207312]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:02:36 compute-1 systemd-sysv-generator[207315]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:02:36 compute-1 sudo[207282]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:37 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:37 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:02:37 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:37.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:02:37 compute-1 sudo[207394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjemablylexqjniigfyvmygmiltwocil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090555.310925-1119-214567832608445/AnsiballZ_systemd.py'
Oct 10 10:02:37 compute-1 sudo[207394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:02:37 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:37 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:37 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:37.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100237 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 10:02:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:37 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f70a4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:37 compute-1 python3.9[207396]: ansible-systemd Invoked with state=restarted name=edpm_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 10:02:37 compute-1 systemd[1]: Reloading.
Oct 10 10:02:37 compute-1 systemd-sysv-generator[207428]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:02:37 compute-1 systemd-rc-local-generator[207422]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:02:37 compute-1 ceph-mon[79167]: pgmap v492: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:02:37 compute-1 systemd[1]: Starting iscsid container...
Oct 10 10:02:38 compute-1 systemd[1]: Started libcrun container.
Oct 10 10:02:38 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/326947f4e00411a76076b87d809a6c8091bd2d003bb135b6d97bcf0ceddb2ea2/merged/etc/target supports timestamps until 2038 (0x7fffffff)
Oct 10 10:02:38 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/326947f4e00411a76076b87d809a6c8091bd2d003bb135b6d97bcf0ceddb2ea2/merged/etc/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 10 10:02:38 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/326947f4e00411a76076b87d809a6c8091bd2d003bb135b6d97bcf0ceddb2ea2/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 10 10:02:38 compute-1 systemd[1]: Started /usr/bin/podman healthcheck run 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143.
Oct 10 10:02:38 compute-1 podman[207436]: 2025-10-10 10:02:38.115566481 +0000 UTC m=+0.170159441 container init 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2)
Oct 10 10:02:38 compute-1 iscsid[207451]: + sudo -E kolla_set_configs
Oct 10 10:02:38 compute-1 podman[207436]: 2025-10-10 10:02:38.1446577 +0000 UTC m=+0.199250650 container start 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 10 10:02:38 compute-1 podman[207436]: iscsid
Oct 10 10:02:38 compute-1 systemd[1]: Started iscsid container.
Oct 10 10:02:38 compute-1 sudo[207458]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 10 10:02:38 compute-1 systemd[1]: Created slice User Slice of UID 0.
Oct 10 10:02:38 compute-1 sudo[207394]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:38 compute-1 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 10 10:02:38 compute-1 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 10 10:02:38 compute-1 systemd[1]: Starting User Manager for UID 0...
Oct 10 10:02:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:38 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f708c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:38 compute-1 systemd[207477]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Oct 10 10:02:38 compute-1 podman[207457]: 2025-10-10 10:02:38.266829434 +0000 UTC m=+0.100290844 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=starting, health_failing_streak=1, health_log=, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Oct 10 10:02:38 compute-1 systemd[1]: 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143-2e6a7298cd651085.service: Main process exited, code=exited, status=1/FAILURE
Oct 10 10:02:38 compute-1 systemd[1]: 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143-2e6a7298cd651085.service: Failed with result 'exit-code'.
Oct 10 10:02:38 compute-1 systemd[207477]: Queued start job for default target Main User Target.
Oct 10 10:02:38 compute-1 systemd[207477]: Created slice User Application Slice.
Oct 10 10:02:38 compute-1 systemd[207477]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 10 10:02:38 compute-1 systemd[207477]: Started Daily Cleanup of User's Temporary Directories.
Oct 10 10:02:38 compute-1 systemd[207477]: Reached target Paths.
Oct 10 10:02:38 compute-1 systemd[207477]: Reached target Timers.
Oct 10 10:02:38 compute-1 systemd[207477]: Starting D-Bus User Message Bus Socket...
Oct 10 10:02:38 compute-1 systemd[207477]: Starting Create User's Volatile Files and Directories...
Oct 10 10:02:38 compute-1 systemd[207477]: Finished Create User's Volatile Files and Directories.
Oct 10 10:02:38 compute-1 systemd[207477]: Listening on D-Bus User Message Bus Socket.
Oct 10 10:02:38 compute-1 systemd[207477]: Reached target Sockets.
Oct 10 10:02:38 compute-1 systemd[207477]: Reached target Basic System.
Oct 10 10:02:38 compute-1 systemd[207477]: Reached target Main User Target.
Oct 10 10:02:38 compute-1 systemd[207477]: Startup finished in 131ms.
Oct 10 10:02:38 compute-1 systemd[1]: Started User Manager for UID 0.
Oct 10 10:02:38 compute-1 systemd[1]: Started Session c3 of User root.
Oct 10 10:02:38 compute-1 sudo[207458]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 10 10:02:38 compute-1 iscsid[207451]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 10 10:02:38 compute-1 iscsid[207451]: INFO:__main__:Validating config file
Oct 10 10:02:38 compute-1 iscsid[207451]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 10 10:02:38 compute-1 iscsid[207451]: INFO:__main__:Writing out command to execute
Oct 10 10:02:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:38 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f708c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:38 compute-1 sudo[207458]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:38 compute-1 systemd[1]: session-c3.scope: Deactivated successfully.
Oct 10 10:02:38 compute-1 iscsid[207451]: ++ cat /run_command
Oct 10 10:02:38 compute-1 iscsid[207451]: + CMD='/usr/sbin/iscsid -f'
Oct 10 10:02:38 compute-1 iscsid[207451]: + ARGS=
Oct 10 10:02:38 compute-1 iscsid[207451]: + sudo kolla_copy_cacerts
Oct 10 10:02:38 compute-1 sudo[207520]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 10 10:02:38 compute-1 systemd[1]: Started Session c4 of User root.
Oct 10 10:02:38 compute-1 sudo[207520]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 10 10:02:38 compute-1 sudo[207520]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:38 compute-1 systemd[1]: session-c4.scope: Deactivated successfully.
Oct 10 10:02:38 compute-1 iscsid[207451]: Running command: '/usr/sbin/iscsid -f'
Oct 10 10:02:38 compute-1 iscsid[207451]: + [[ ! -n '' ]]
Oct 10 10:02:38 compute-1 iscsid[207451]: + . kolla_extend_start
Oct 10 10:02:38 compute-1 iscsid[207451]: ++ [[ ! -f /etc/iscsi/initiatorname.iscsi ]]
Oct 10 10:02:38 compute-1 iscsid[207451]: + echo 'Running command: '\''/usr/sbin/iscsid -f'\'''
Oct 10 10:02:38 compute-1 iscsid[207451]: + umask 0022
Oct 10 10:02:38 compute-1 iscsid[207451]: + exec /usr/sbin/iscsid -f
Oct 10 10:02:38 compute-1 kernel: Loading iSCSI transport class v2.0-870.
Oct 10 10:02:39 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:39 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:02:39 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:39.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:02:39 compute-1 python3.9[207655]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.iscsid_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 10:02:39 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:39 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:39 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:39.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:39 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f709c001d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:39 compute-1 sudo[207806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axbrvvwjelinksrxdeusmxslfeohgtdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090559.3513148-1230-221742005246075/AnsiballZ_file.py'
Oct 10 10:02:39 compute-1 sudo[207806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:39 compute-1 ceph-mon[79167]: pgmap v493: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:02:39 compute-1 python3.9[207808]: ansible-ansible.builtin.file Invoked with path=/etc/iscsi/.iscsid_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:02:39 compute-1 sudo[207806]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:40 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:40 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f70a4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:40 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:40 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f70840016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:40 compute-1 sudo[207958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tllxvnyhvrzqydywkdjailytamqbejan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090560.4579823-1263-20892855311004/AnsiballZ_service_facts.py'
Oct 10 10:02:40 compute-1 sudo[207958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:40 compute-1 python3.9[207960]: ansible-ansible.builtin.service_facts Invoked
Oct 10 10:02:41 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:41 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:41 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:41.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:41 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:41 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:02:41 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:41.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:02:41 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:41 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f708c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:41 compute-1 ceph-mon[79167]: pgmap v494: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 10:02:42 compute-1 podman[207964]: 2025-10-10 10:02:42.030814883 +0000 UTC m=+0.119973754 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Oct 10 10:02:42 compute-1 network[208005]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 10 10:02:42 compute-1 network[208006]: 'network-scripts' will be removed from distribution in near future.
Oct 10 10:02:42 compute-1 network[208007]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 10 10:02:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:02:42.196 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:02:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:02:42.197 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:02:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:02:42.197 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:02:42 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:42 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f709c001d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:42 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:02:42 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:42 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f70a4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:43 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:43 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:02:43 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:43.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:02:43 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:43 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:43 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:43.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:43 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7084001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:43 compute-1 ceph-mon[79167]: pgmap v495: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 10:02:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:44 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f708c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:44 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f709c001d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:45 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:45 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:45 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:45.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:45 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:45 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:45 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:45.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:45 compute-1 podman[208100]: 2025-10-10 10:02:45.394402261 +0000 UTC m=+0.067858355 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 10 10:02:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:45 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f70a4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:45 compute-1 ceph-mon[79167]: pgmap v496: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 10:02:46 compute-1 sudo[207958]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:46 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:46 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7084001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:46 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:46 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7084001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:46 compute-1 sudo[208302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxjmqyxarshabegsvorhydypbrwqgvwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090566.4304695-1293-4574407587579/AnsiballZ_file.py'
Oct 10 10:02:46 compute-1 sudo[208302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:46 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:02:46 compute-1 python3.9[208304]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 10 10:02:46 compute-1 sudo[208302]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:47 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:47 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:47 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:47.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:02:47 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:47 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:02:47 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:47.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:02:47 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:47 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f709c0031e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:47 compute-1 sudo[208455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwbwhsjjhbexfyxgajgppbbmnqifevks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090567.198636-1317-8792563316108/AnsiballZ_modprobe.py'
Oct 10 10:02:47 compute-1 sudo[208455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:47 compute-1 ceph-mon[79167]: pgmap v497: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 10:02:47 compute-1 python3.9[208457]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Oct 10 10:02:47 compute-1 sudo[208455]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:48 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f70a4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:48 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7084001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:48 compute-1 sudo[208611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spdxgudnbcsrtmqviscixekcmjjterno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090568.1351473-1341-241432994076533/AnsiballZ_stat.py'
Oct 10 10:02:48 compute-1 sudo[208611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:48 compute-1 systemd[1]: Stopping User Manager for UID 0...
Oct 10 10:02:48 compute-1 systemd[207477]: Activating special unit Exit the Session...
Oct 10 10:02:48 compute-1 systemd[207477]: Stopped target Main User Target.
Oct 10 10:02:48 compute-1 systemd[207477]: Stopped target Basic System.
Oct 10 10:02:48 compute-1 systemd[207477]: Stopped target Paths.
Oct 10 10:02:48 compute-1 systemd[207477]: Stopped target Sockets.
Oct 10 10:02:48 compute-1 systemd[207477]: Stopped target Timers.
Oct 10 10:02:48 compute-1 systemd[207477]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 10 10:02:48 compute-1 systemd[207477]: Closed D-Bus User Message Bus Socket.
Oct 10 10:02:48 compute-1 systemd[207477]: Stopped Create User's Volatile Files and Directories.
Oct 10 10:02:48 compute-1 systemd[207477]: Removed slice User Application Slice.
Oct 10 10:02:48 compute-1 systemd[207477]: Reached target Shutdown.
Oct 10 10:02:48 compute-1 systemd[207477]: Finished Exit the Session.
Oct 10 10:02:48 compute-1 systemd[207477]: Reached target Exit the Session.
Oct 10 10:02:48 compute-1 systemd[1]: user@0.service: Deactivated successfully.
Oct 10 10:02:48 compute-1 systemd[1]: Stopped User Manager for UID 0.
Oct 10 10:02:48 compute-1 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 10 10:02:48 compute-1 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 10 10:02:48 compute-1 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 10 10:02:48 compute-1 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 10 10:02:48 compute-1 systemd[1]: Removed slice User Slice of UID 0.
Oct 10 10:02:48 compute-1 python3.9[208613]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:02:48 compute-1 sudo[208611]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:49 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:49 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:02:49 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:49.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:02:49 compute-1 sudo[208735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grzhtwaafbbqkidtiuqqefjywyfkqqnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090568.1351473-1341-241432994076533/AnsiballZ_copy.py'
Oct 10 10:02:49 compute-1 sudo[208735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:49 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:49 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:02:49 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:49.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:02:49 compute-1 python3.9[208737]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760090568.1351473-1341-241432994076533/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:02:49 compute-1 sudo[208735]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:49 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f708c002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:49 compute-1 ceph-mon[79167]: pgmap v498: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 10 10:02:50 compute-1 sudo[208888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulhqpyfvvohxwmifwoundspnplifntyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090569.7612092-1389-12278369217121/AnsiballZ_lineinfile.py'
Oct 10 10:02:50 compute-1 sudo[208888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:50 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:50 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f709c0031e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:50 compute-1 python3.9[208890]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:02:50 compute-1 sudo[208888]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:50 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:50 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f70a4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:50 compute-1 sudo[209040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnfygfmificmcmaszmpjvbfivqnhmfgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090570.5810585-1413-160681631594518/AnsiballZ_systemd.py'
Oct 10 10:02:50 compute-1 sudo[209040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:51 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:51 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:02:51 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:51.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:02:51 compute-1 python3.9[209042]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 10:02:51 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:51 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:51 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:51.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:51 compute-1 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 10 10:02:51 compute-1 systemd[1]: Stopped Load Kernel Modules.
Oct 10 10:02:51 compute-1 systemd[1]: Stopping Load Kernel Modules...
Oct 10 10:02:51 compute-1 systemd[1]: Starting Load Kernel Modules...
Oct 10 10:02:51 compute-1 systemd[1]: Finished Load Kernel Modules.
Oct 10 10:02:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:51 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7084001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:51 compute-1 sudo[209040]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:51 compute-1 systemd[1]: virtnodedevd.service: Deactivated successfully.
Oct 10 10:02:51 compute-1 ceph-mon[79167]: pgmap v499: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:02:52 compute-1 sudo[209198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsqtdhtoqhbjuwnisxxkppmqorjqujfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090571.6961417-1437-233941992374416/AnsiballZ_file.py'
Oct 10 10:02:52 compute-1 sudo[209198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:52 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:52 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f708c003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:52 compute-1 python3.9[209200]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:02:52 compute-1 sudo[209198]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:02:52 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:52 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f709c0031e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:52 compute-1 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 10 10:02:52 compute-1 sudo[209351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxmlzgkqbxhiqecpqzbtojurclpglfde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090572.6304767-1464-19979117413301/AnsiballZ_stat.py'
Oct 10 10:02:52 compute-1 sudo[209351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:53 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:53 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:02:53 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:53.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:02:53 compute-1 python3.9[209353]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 10:02:53 compute-1 sudo[209351]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:53 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:53 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:02:53 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:53.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:02:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:53 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f70a4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:53 compute-1 ceph-mon[79167]: pgmap v500: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 10:02:53 compute-1 sudo[209504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snrxwafxezwwikyuigzpmlmixhenpfjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090573.5344834-1491-247235989105708/AnsiballZ_stat.py'
Oct 10 10:02:53 compute-1 sudo[209504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:54 compute-1 python3.9[209506]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 10:02:54 compute-1 sudo[209504]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:54 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:54 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7084003ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:54 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:54 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f708c003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:54 compute-1 sudo[209677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqhlcrmjdtamlokrfkhoncopyehhqqua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090574.3226275-1515-147727165027475/AnsiballZ_stat.py'
Oct 10 10:02:54 compute-1 sudo[209677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:54 compute-1 sudo[209637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:02:54 compute-1 sudo[209637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:02:54 compute-1 sudo[209637]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:54 compute-1 python3.9[209682]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:02:54 compute-1 sudo[209677]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:55 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:55 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:02:55 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:55.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:02:55 compute-1 sudo[209805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgwjcvcsejtokbkdogtyafdjpagtrcus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090574.3226275-1515-147727165027475/AnsiballZ_copy.py'
Oct 10 10:02:55 compute-1 sudo[209805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:55 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:55 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:55 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:55.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:55 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f709c0042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:55 compute-1 python3.9[209807]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760090574.3226275-1515-147727165027475/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:02:55 compute-1 sudo[209805]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:55 compute-1 ceph-mon[79167]: pgmap v501: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:02:56 compute-1 sudo[209957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgaljhnmcsqatsmxayberqcuphglocyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090575.7994447-1560-22995038087270/AnsiballZ_command.py'
Oct 10 10:02:56 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:56 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f70a4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:56 compute-1 sudo[209957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:56 compute-1 python3.9[209959]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 10:02:56 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:56 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7084003ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:56 compute-1 sudo[209957]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:57 compute-1 sudo[210110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ramiaiosrfacvcmjvaphzwnukiszulmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090576.6902413-1584-58459541224535/AnsiballZ_lineinfile.py'
Oct 10 10:02:57 compute-1 sudo[210110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:57 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:57 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:57 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:57.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:57 compute-1 python3.9[210112]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:02:57 compute-1 sudo[210110]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:02:57 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:57 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:57 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:57.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:57 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f708c004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:57 compute-1 ceph-mon[79167]: pgmap v502: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:02:57 compute-1 sudo[210263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpodqoxqljeuhcixpldazdbponrdtorw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090577.469169-1608-49605905125398/AnsiballZ_replace.py'
Oct 10 10:02:57 compute-1 sudo[210263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:58 compute-1 python3.9[210265]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:02:58 compute-1 sudo[210263]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:58 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f709c0042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:58 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f70a4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:58 compute-1 sudo[210415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hohunzrtpeqhqtejwzjbnujzlbzlhjit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090578.4232664-1632-165840274305585/AnsiballZ_replace.py'
Oct 10 10:02:58 compute-1 sudo[210415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:58 compute-1 python3.9[210417]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:02:59 compute-1 sudo[210415]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:59 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:59 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.003000082s ======
Oct 10 10:02:59 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:02:59.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000082s
Oct 10 10:02:59 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:02:59 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:02:59 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:02:59.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:02:59 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:02:59 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7084003ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:02:59 compute-1 sudo[210568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-karshknahiglriwvyqnknheaoxskovtj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090579.2628102-1659-40520369841038/AnsiballZ_lineinfile.py'
Oct 10 10:02:59 compute-1 sudo[210568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:02:59 compute-1 python3.9[210570]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:02:59 compute-1 sudo[210568]: pam_unix(sudo:session): session closed for user root
Oct 10 10:02:59 compute-1 ceph-mon[79167]: pgmap v503: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 10:03:00 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:00 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f708c004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:00 compute-1 sudo[210720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvkfathlvjztwmwwvjelbpkjumnpymat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090579.994245-1659-101483491049997/AnsiballZ_lineinfile.py'
Oct 10 10:03:00 compute-1 sudo[210720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:00 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:00 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f709c0042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:00 compute-1 python3.9[210722]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:03:00 compute-1 sudo[210720]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:01 compute-1 sudo[210872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mciiiizgqxcexdkxglutswrfxlljjkrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090580.747108-1659-167510277534744/AnsiballZ_lineinfile.py'
Oct 10 10:03:01 compute-1 sudo[210872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:01 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:01 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:01 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:01.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:01 compute-1 python3.9[210874]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:03:01 compute-1 sudo[210872]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:01 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:01 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:01 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:01.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:01 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:01 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f70a4003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:01 compute-1 sudo[211025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbbfyfinndnrejtmobpmqcpsedsaylja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090581.4819014-1659-153806091793132/AnsiballZ_lineinfile.py'
Oct 10 10:03:01 compute-1 sudo[211025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:01 compute-1 ceph-mon[79167]: pgmap v504: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:03:01 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:03:02 compute-1 python3.9[211027]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:03:02 compute-1 sudo[211025]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:02 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:02 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7084003ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:03:02 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:02 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f708c004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:02 compute-1 sudo[211177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmkpnemspsorkycqshzpmpbyittabrrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090582.2884035-1747-50540453034795/AnsiballZ_stat.py'
Oct 10 10:03:02 compute-1 sudo[211177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:02 compute-1 python3.9[211179]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 10:03:02 compute-1 sudo[211177]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:03 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:03 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:03 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:03.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:03 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:03 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:03 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:03.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:03 compute-1 sudo[211332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acodkzfspsarlgnzegmepcgvlvfwvwly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090583.092918-1770-235576003275817/AnsiballZ_file.py'
Oct 10 10:03:03 compute-1 sudo[211332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:03 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f709c0042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:03 compute-1 python3.9[211334]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:03:03 compute-1 sudo[211332]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:03 compute-1 ceph-mon[79167]: pgmap v505: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 10:03:04 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:04 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f70a4003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:04 compute-1 sudo[211484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rplcntbggzfxpzagnmtorflgzwaoxkgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090584.0299234-1797-257587310644850/AnsiballZ_file.py'
Oct 10 10:03:04 compute-1 sudo[211484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:04 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:04 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7084003ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:04 compute-1 python3.9[211486]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:03:04 compute-1 sudo[211484]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:04 compute-1 systemd[1]: virtqemud.service: Deactivated successfully.
Oct 10 10:03:04 compute-1 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 10 10:03:05 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:05 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:05 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:05.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:05 compute-1 sudo[211638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrtencrewduiknqhmnvfdhttlsgcsjyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090584.8512151-1821-189890071113936/AnsiballZ_stat.py'
Oct 10 10:03:05 compute-1 sudo[211638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:05 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:05 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:05 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:05.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:05 compute-1 python3.9[211640]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:03:05 compute-1 sudo[211638]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:05 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:05 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f708c004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:05 compute-1 sudo[211719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqdiqzmtgrmlqalznipsreubjgmlcafe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090584.8512151-1821-189890071113936/AnsiballZ_file.py'
Oct 10 10:03:05 compute-1 sudo[211719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:05 compute-1 python3.9[211721]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:03:05 compute-1 ceph-mon[79167]: pgmap v506: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:03:05 compute-1 sudo[211719]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:06 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:06 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f709c0042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:06 compute-1 sudo[211871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuaubmpkfktdybyajdcjucjbatxwbmlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090586.1259577-1821-197594308046487/AnsiballZ_stat.py'
Oct 10 10:03:06 compute-1 sudo[211871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:06 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:06 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7080000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:06 compute-1 python3.9[211873]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:03:06 compute-1 sudo[211871]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:06 compute-1 sudo[211949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvcjhmoahumiyhhqanwaayudtzfteaxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090586.1259577-1821-197594308046487/AnsiballZ_file.py'
Oct 10 10:03:06 compute-1 sudo[211949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:07 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:07 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:07 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:07.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:07 compute-1 python3.9[211951]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:03:07 compute-1 sudo[211949]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:03:07 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:07 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:07 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:07.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:07 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7084003ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:07 compute-1 sudo[212102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agxkuyqxncpskakrfzjvokzezhquwjku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090587.4554434-1890-257210436693220/AnsiballZ_file.py'
Oct 10 10:03:07 compute-1 sudo[212102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:07 compute-1 ceph-mon[79167]: pgmap v507: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:03:08 compute-1 python3.9[212104]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:03:08 compute-1 sudo[212102]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:08 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f70b0001340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:08 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f709c0042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:08 compute-1 sudo[212267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mypnieblbbkdcremsnazcnniajalgxbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090588.26963-1914-260079388760928/AnsiballZ_stat.py'
Oct 10 10:03:08 compute-1 sudo[212267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:08 compute-1 podman[212228]: 2025-10-10 10:03:08.664174894 +0000 UTC m=+0.107082340 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid)
Oct 10 10:03:08 compute-1 python3.9[212274]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:03:08 compute-1 sudo[212267]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:09 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:09 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:09 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:09.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:09 compute-1 sudo[212353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdmczxtanqoqezscgokwmvbjvnqpcktj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090588.26963-1914-260079388760928/AnsiballZ_file.py'
Oct 10 10:03:09 compute-1 sudo[212353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100309 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 10:03:09 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:09 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:03:09 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:09.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:03:09 compute-1 python3.9[212355]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:03:09 compute-1 sudo[212353]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:09 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f70800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:09 compute-1 sudo[212506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhjdabubeppefviajpemozcjvznacgnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090589.6348135-1950-245011718845237/AnsiballZ_stat.py'
Oct 10 10:03:09 compute-1 sudo[212506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:10 compute-1 ceph-mon[79167]: pgmap v508: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 10:03:10 compute-1 python3.9[212508]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:03:10 compute-1 sudo[212506]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:10 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:10 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7084003ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:10 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:10 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f70b0001340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:10 compute-1 sudo[212584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyemtgibkrwrapvbniowlgivxvijxyfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090589.6348135-1950-245011718845237/AnsiballZ_file.py'
Oct 10 10:03:10 compute-1 sudo[212584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:10 compute-1 python3.9[212586]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:03:10 compute-1 sudo[212584]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:11 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:11 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:11 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:11.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:11 compute-1 sudo[212737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rywmhdujxigdnuqwvzipolciyfhlryau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090590.974453-1986-119087906776031/AnsiballZ_systemd.py'
Oct 10 10:03:11 compute-1 sudo[212737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:11 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:11 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:11 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:11.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:11 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f709c0042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:11 compute-1 python3.9[212739]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 10:03:11 compute-1 systemd[1]: Reloading.
Oct 10 10:03:11 compute-1 systemd-rc-local-generator[212766]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:03:11 compute-1 systemd-sysv-generator[212771]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:03:12 compute-1 ceph-mon[79167]: pgmap v509: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:03:12 compute-1 sudo[212737]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:12 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:12 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f70800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:03:12 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:12 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7084003ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:12 compute-1 sudo[212935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvgpkrxiglpvqiqnhhjcqwiplinctlgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090592.2555757-2010-228266349120531/AnsiballZ_stat.py'
Oct 10 10:03:12 compute-1 sudo[212935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:12 compute-1 podman[212900]: 2025-10-10 10:03:12.681242361 +0000 UTC m=+0.147347455 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 10 10:03:12 compute-1 python3.9[212943]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:03:12 compute-1 sudo[212935]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:13 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:13 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:13 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:13.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:13 compute-1 sudo[213027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqfprwqiambgeewekjronkmghdgzzcif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090592.2555757-2010-228266349120531/AnsiballZ_file.py'
Oct 10 10:03:13 compute-1 sudo[213027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:13 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:13 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:13 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:13.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:13 compute-1 python3.9[213029]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:03:13 compute-1 sudo[213027]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:13 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f70b0001340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:13 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0.
Oct 10 10:03:13 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:03:13.833790) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 10:03:13 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34
Oct 10 10:03:13 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090593833827, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 1228, "num_deletes": 254, "total_data_size": 2976025, "memory_usage": 3016176, "flush_reason": "Manual Compaction"}
Oct 10 10:03:13 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started
Oct 10 10:03:13 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090593851249, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 1967335, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18590, "largest_seqno": 19813, "table_properties": {"data_size": 1962011, "index_size": 2784, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 10656, "raw_average_key_size": 18, "raw_value_size": 1951470, "raw_average_value_size": 3387, "num_data_blocks": 125, "num_entries": 576, "num_filter_entries": 576, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760090484, "oldest_key_time": 1760090484, "file_creation_time": 1760090593, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:03:13 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 17569 microseconds, and 8336 cpu microseconds.
Oct 10 10:03:13 compute-1 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:03:13 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:03:13.851357) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 1967335 bytes OK
Oct 10 10:03:13 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:03:13.851380) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started
Oct 10 10:03:13 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:03:13.852809) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done
Oct 10 10:03:13 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:03:13.852835) EVENT_LOG_v1 {"time_micros": 1760090593852828, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 10:03:13 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:03:13.852856) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 10:03:13 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 2970163, prev total WAL file size 2970163, number of live WAL files 2.
Oct 10 10:03:13 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:03:13 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:03:13.854877) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323531' seq:0, type:0; will stop at (end)
Oct 10 10:03:13 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 10:03:13 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(1921KB)], [33(11MB)]
Oct 10 10:03:13 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090593854963, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 14184621, "oldest_snapshot_seqno": -1}
Oct 10 10:03:13 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 5005 keys, 13703013 bytes, temperature: kUnknown
Oct 10 10:03:13 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090593935501, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 36, "file_size": 13703013, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13668057, "index_size": 21342, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12549, "raw_key_size": 126978, "raw_average_key_size": 25, "raw_value_size": 13575811, "raw_average_value_size": 2712, "num_data_blocks": 878, "num_entries": 5005, "num_filter_entries": 5005, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089522, "oldest_key_time": 0, "file_creation_time": 1760090593, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:03:13 compute-1 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:03:13 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:03:13.935815) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 13703013 bytes
Oct 10 10:03:13 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:03:13.937374) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 175.9 rd, 169.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 11.7 +0.0 blob) out(13.1 +0.0 blob), read-write-amplify(14.2) write-amplify(7.0) OK, records in: 5527, records dropped: 522 output_compression: NoCompression
Oct 10 10:03:13 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:03:13.937406) EVENT_LOG_v1 {"time_micros": 1760090593937392, "job": 18, "event": "compaction_finished", "compaction_time_micros": 80652, "compaction_time_cpu_micros": 50102, "output_level": 6, "num_output_files": 1, "total_output_size": 13703013, "num_input_records": 5527, "num_output_records": 5005, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 10:03:13 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:03:13 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090593938480, "job": 18, "event": "table_file_deletion", "file_number": 35}
Oct 10 10:03:13 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:03:13 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090593942821, "job": 18, "event": "table_file_deletion", "file_number": 33}
Oct 10 10:03:13 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:03:13.854753) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:03:13 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:03:13.943008) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:03:13 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:03:13.943017) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:03:13 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:03:13.943020) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:03:13 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:03:13.943023) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:03:13 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:03:13.943026) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:03:14 compute-1 sudo[213180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whjqtaheyujiwopzbpkzbwpbenakgktx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090593.6482112-2046-19166326833920/AnsiballZ_stat.py'
Oct 10 10:03:14 compute-1 sudo[213180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:14 compute-1 ceph-mon[79167]: pgmap v510: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 10:03:14 compute-1 python3.9[213182]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:03:14 compute-1 sudo[213180]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:14 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f709c0042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:14 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f70800016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:14 compute-1 sudo[213258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xaussxcflwijgehsfbdtqlivvkncbcit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090593.6482112-2046-19166326833920/AnsiballZ_file.py'
Oct 10 10:03:14 compute-1 sudo[213258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:14 compute-1 python3.9[213260]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:03:14 compute-1 sudo[213261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:03:14 compute-1 sudo[213261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:03:14 compute-1 sudo[213258]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:14 compute-1 sudo[213261]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:15 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:15 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:15 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:15.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:15 compute-1 sudo[213436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rygxhiyxfkvqrbjmmtczpkoeyqmobccw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090594.977409-2082-70053947598349/AnsiballZ_systemd.py'
Oct 10 10:03:15 compute-1 sudo[213436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:15 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:15 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:15 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:15.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:15 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7084003ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:15 compute-1 python3.9[213438]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 10:03:15 compute-1 systemd[1]: Reloading.
Oct 10 10:03:15 compute-1 systemd-rc-local-generator[213486]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:03:15 compute-1 podman[213440]: 2025-10-10 10:03:15.761525271 +0000 UTC m=+0.084018177 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 10 10:03:15 compute-1 systemd-sysv-generator[213490]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:03:16 compute-1 ceph-mon[79167]: pgmap v511: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:03:16 compute-1 systemd[1]: Starting Create netns directory...
Oct 10 10:03:16 compute-1 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 10 10:03:16 compute-1 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 10 10:03:16 compute-1 systemd[1]: Finished Create netns directory.
Oct 10 10:03:16 compute-1 sudo[213436]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:16 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:16 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7084003ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:16 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:16 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f709c0042e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:16 compute-1 sudo[213649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qemfezdmmdwtbbkkzcfdznhxipizpwvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090596.5124075-2112-114728551952043/AnsiballZ_file.py'
Oct 10 10:03:16 compute-1 sudo[213649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:17 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:03:17 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:17 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:17 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:17.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:17 compute-1 python3.9[213651]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:03:17 compute-1 sudo[213649]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:03:17 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:17 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:17 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:17.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:17 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:17 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7080002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:17 compute-1 sudo[213802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvrtpxmnjfhlfjaisxzkhyonnnuicwoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090597.4055212-2136-76429710949211/AnsiballZ_stat.py'
Oct 10 10:03:17 compute-1 sudo[213802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:18 compute-1 python3.9[213804]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:03:18 compute-1 sudo[213802]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:18 compute-1 ceph-mon[79167]: pgmap v512: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:03:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:18 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f7084003ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:18 compute-1 sudo[213925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hygbznqvafjbqoqygzmcwkwisbkhzhmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090597.4055212-2136-76429710949211/AnsiballZ_copy.py'
Oct 10 10:03:18 compute-1 sudo[213925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:18 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f70b0008dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:18 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:03:18 compute-1 python3.9[213927]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760090597.4055212-2136-76429710949211/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:03:18 compute-1 sudo[213925]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:19 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:19 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:19 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:19.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:19 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:19 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:19 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:19.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[204996]: 10/10/2025 10:03:19 : epoch 68e8d9ad : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f709c0042e0 fd 38 proxy ignored for local
Oct 10 10:03:19 compute-1 kernel: ganesha.nfsd[206791]: segfault at 50 ip 00007f715963e32e sp 00007f711affc210 error 4 in libntirpc.so.5.8[7f7159623000+2c000] likely on CPU 2 (core 0, socket 2)
Oct 10 10:03:19 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 10 10:03:19 compute-1 systemd[1]: Started Process Core Dump (PID 214041/UID 0).
Oct 10 10:03:19 compute-1 sudo[214080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cohqzminjxkrgwqlzewzyxmvvegyahxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090599.217856-2188-114212058947129/AnsiballZ_file.py'
Oct 10 10:03:19 compute-1 sudo[214080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:19 compute-1 python3.9[214082]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:03:19 compute-1 sudo[214080]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:20 compute-1 ceph-mon[79167]: pgmap v513: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:03:20 compute-1 sudo[214232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isvrwlpjmtglvbhfyhtmiccnkhbxjncl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090600.1084583-2211-231822840879581/AnsiballZ_stat.py'
Oct 10 10:03:20 compute-1 sudo[214232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:20 compute-1 systemd-coredump[214053]: Process 205003 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 45:
                                                    #0  0x00007f715963e32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Oct 10 10:03:20 compute-1 python3.9[214234]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:03:20 compute-1 sudo[214232]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:20 compute-1 systemd[1]: systemd-coredump@7-214041-0.service: Deactivated successfully.
Oct 10 10:03:20 compute-1 systemd[1]: systemd-coredump@7-214041-0.service: Consumed 1.133s CPU time.
Oct 10 10:03:20 compute-1 podman[214239]: 2025-10-10 10:03:20.792118111 +0000 UTC m=+0.051978608 container died 4717a0fb642c4e89fc048e37c1ba16280d16bc6e7bc4d2c9d3fc1b766a49154e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:03:20 compute-1 systemd[1]: var-lib-containers-storage-overlay-c38e0c120a6bc62a555c2bf449eb1333a41471a8f274a6f9e21cf4b9732a074c-merged.mount: Deactivated successfully.
Oct 10 10:03:20 compute-1 podman[214239]: 2025-10-10 10:03:20.844847438 +0000 UTC m=+0.104707885 container remove 4717a0fb642c4e89fc048e37c1ba16280d16bc6e7bc4d2c9d3fc1b766a49154e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct 10 10:03:20 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Main process exited, code=exited, status=139/n/a
Oct 10 10:03:21 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Failed with result 'exit-code'.
Oct 10 10:03:21 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 1.831s CPU time.
Oct 10 10:03:21 compute-1 sudo[214402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmgierymyymvtvmdvfuqribjcpkogjzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090600.1084583-2211-231822840879581/AnsiballZ_copy.py'
Oct 10 10:03:21 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:21 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:21 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:21.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:21 compute-1 sudo[214402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:21 compute-1 python3.9[214404]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760090600.1084583-2211-231822840879581/.source.json _original_basename=.1ag5jklu follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:03:21 compute-1 sudo[214402]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:21 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:21 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:03:21 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:21.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:03:21 compute-1 sudo[214556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdgpadvbgdbvogqnjlgnpzmqmhrjyjbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090601.5349004-2256-212360212301856/AnsiballZ_file.py'
Oct 10 10:03:21 compute-1 sudo[214556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:22 compute-1 ceph-mon[79167]: pgmap v514: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:03:22 compute-1 python3.9[214558]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:03:22 compute-1 sudo[214556]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:22 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:03:22 compute-1 sudo[214708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iajplvaypdfzpbsxonxlkdzrkisivsmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090602.5102937-2280-207776927654686/AnsiballZ_stat.py'
Oct 10 10:03:22 compute-1 sudo[214708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:23 compute-1 sudo[214708]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:23 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:23 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:23 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:23.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:23 compute-1 ceph-mon[79167]: pgmap v515: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 852 B/s wr, 2 op/s
Oct 10 10:03:23 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:23 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:03:23 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:23.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:03:23 compute-1 sudo[214832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jogrzfnyldfnwabocrhbyxxddovqzbfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090602.5102937-2280-207776927654686/AnsiballZ_copy.py'
Oct 10 10:03:23 compute-1 sudo[214832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:23 compute-1 sudo[214832]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:24 compute-1 sudo[214984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyupjcvzgkcrwcaafnobfqmzfkoktsow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090604.1482432-2331-170160649750085/AnsiballZ_container_config_data.py'
Oct 10 10:03:24 compute-1 sudo[214984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:24 compute-1 python3.9[214986]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Oct 10 10:03:24 compute-1 sudo[214984]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:25 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:25 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:25 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:25.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:25 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:25 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:25 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:25.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:25 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100325 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 10:03:25 compute-1 sudo[215158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqxomufduiojmpothyedasecciazaudn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090605.1018014-2358-173545562869401/AnsiballZ_container_config_hash.py'
Oct 10 10:03:25 compute-1 sudo[215158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:25 compute-1 sudo[215118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:03:25 compute-1 sudo[215118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:03:25 compute-1 sudo[215118]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:25 compute-1 sudo[215165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:03:25 compute-1 sudo[215165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:03:25 compute-1 ceph-mon[79167]: pgmap v516: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 852 B/s wr, 2 op/s
Oct 10 10:03:25 compute-1 python3.9[215163]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 10 10:03:25 compute-1 sudo[215158]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:26 compute-1 sudo[215165]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:26 compute-1 sudo[215371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxglybjrjpdmrlrfyxrdasmcmlnxyfgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090606.0470455-2385-217922800548385/AnsiballZ_podman_container_info.py'
Oct 10 10:03:26 compute-1 sudo[215371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:26 compute-1 python3.9[215373]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 10 10:03:26 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:03:26 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:03:26 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:03:26 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:03:26 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:03:26 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:03:26 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:03:26 compute-1 sudo[215371]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:27 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:27 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:27 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:27.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:27 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:03:27 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:27 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:27 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:27.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:27 compute-1 ceph-mon[79167]: pgmap v517: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 852 B/s wr, 2 op/s
Oct 10 10:03:28 compute-1 sudo[215551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcduljidivjmaiossymvfvjuqwexpfbv ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760090608.1319947-2424-222018998987196/AnsiballZ_edpm_container_manage.py'
Oct 10 10:03:28 compute-1 sudo[215551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:28 compute-1 python3[215553]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 10 10:03:29 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:29 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:03:29 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:29.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:03:29 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:29 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:29 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:29.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:29 compute-1 ceph-mon[79167]: pgmap v518: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:03:30 compute-1 podman[215567]: 2025-10-10 10:03:30.122783997 +0000 UTC m=+1.277202824 image pull f541ff382622bd8bc9ad206129d2a8e74c239ff4503fa3b67d3bdf6d5b50b511 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43
Oct 10 10:03:30 compute-1 podman[215625]: 2025-10-10 10:03:30.308976178 +0000 UTC m=+0.071672178 container create b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 10 10:03:30 compute-1 podman[215625]: 2025-10-10 10:03:30.273392921 +0000 UTC m=+0.036088951 image pull f541ff382622bd8bc9ad206129d2a8e74c239ff4503fa3b67d3bdf6d5b50b511 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43
Oct 10 10:03:30 compute-1 python3[215553]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43
Oct 10 10:03:30 compute-1 sudo[215551]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:31 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Scheduled restart job, restart counter is at 8.
Oct 10 10:03:31 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 10:03:31 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 1.831s CPU time.
Oct 10 10:03:31 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:31 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:03:31 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:31.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:03:31 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 10:03:31 compute-1 sudo[215814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-senksuthsfysxpesbwfxwrgbaospfmba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090610.741467-2448-239069998675962/AnsiballZ_stat.py'
Oct 10 10:03:31 compute-1 sudo[215814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100331 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 10:03:31 compute-1 python3.9[215818]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 10:03:31 compute-1 sudo[215814]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:31 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:31 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:31 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:31.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:31 compute-1 podman[215879]: 2025-10-10 10:03:31.475965354 +0000 UTC m=+0.046119527 container create 37e8592f37054ae63e3280e5a3a91716481e4ea058c2c324979351a49841a1a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct 10 10:03:31 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fac92ad902689e585fa54275be7aa322ccd3fe3e1d273fc9cc6be5c234ab6b30/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 10 10:03:31 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fac92ad902689e585fa54275be7aa322ccd3fe3e1d273fc9cc6be5c234ab6b30/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:03:31 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fac92ad902689e585fa54275be7aa322ccd3fe3e1d273fc9cc6be5c234ab6b30/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:03:31 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fac92ad902689e585fa54275be7aa322ccd3fe3e1d273fc9cc6be5c234ab6b30/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.mssvzx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:03:31 compute-1 podman[215879]: 2025-10-10 10:03:31.53556662 +0000 UTC m=+0.105720803 container init 37e8592f37054ae63e3280e5a3a91716481e4ea058c2c324979351a49841a1a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct 10 10:03:31 compute-1 podman[215879]: 2025-10-10 10:03:31.542029268 +0000 UTC m=+0.112183441 container start 37e8592f37054ae63e3280e5a3a91716481e4ea058c2c324979351a49841a1a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct 10 10:03:31 compute-1 bash[215879]: 37e8592f37054ae63e3280e5a3a91716481e4ea058c2c324979351a49841a1a8
Oct 10 10:03:31 compute-1 podman[215879]: 2025-10-10 10:03:31.458566167 +0000 UTC m=+0.028720360 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:03:31 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 10:03:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:31 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 10 10:03:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:31 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 10 10:03:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:31 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 10 10:03:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:31 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 10 10:03:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:31 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 10 10:03:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:31 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 10 10:03:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:31 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 10 10:03:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:31 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:03:31 compute-1 ceph-mon[79167]: pgmap v519: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 852 B/s wr, 2 op/s
Oct 10 10:03:31 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:03:32 compute-1 sudo[216072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vaxxidvqurymvnymmsmosvqgcbfdqxzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090611.6630602-2475-160932346188481/AnsiballZ_file.py'
Oct 10 10:03:32 compute-1 sudo[216072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:32 compute-1 python3.9[216074]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:03:32 compute-1 sudo[216072]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:32 compute-1 sudo[216075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:03:32 compute-1 sudo[216075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:03:32 compute-1 sudo[216075]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:32 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:03:32 compute-1 sudo[216173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsdovjklnthwrtcnoyiaaqtymqllcqgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090611.6630602-2475-160932346188481/AnsiballZ_stat.py'
Oct 10 10:03:32 compute-1 sudo[216173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:32 compute-1 python3.9[216175]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 10:03:32 compute-1 sudo[216173]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:33 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:03:33 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:03:33 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:33 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:33 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:33.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:33 compute-1 sudo[216325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lckvywykyefbpanlfxfuirujmqxrhucd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090612.799489-2475-35260936063103/AnsiballZ_copy.py'
Oct 10 10:03:33 compute-1 sudo[216325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:33 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:33 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:03:33 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:33.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:03:33 compute-1 python3.9[216327]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760090612.799489-2475-35260936063103/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:03:33 compute-1 sudo[216325]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:33 compute-1 sudo[216401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbhyjhxvxbuztcpwzvwwfxjndsqzjbqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090612.799489-2475-35260936063103/AnsiballZ_systemd.py'
Oct 10 10:03:33 compute-1 sudo[216401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:34 compute-1 ceph-mon[79167]: pgmap v520: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:03:34 compute-1 python3.9[216403]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 10 10:03:34 compute-1 systemd[1]: Reloading.
Oct 10 10:03:34 compute-1 systemd-rc-local-generator[216426]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:03:34 compute-1 systemd-sysv-generator[216433]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:03:34 compute-1 sudo[216401]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:34 compute-1 sudo[216486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:03:34 compute-1 sudo[216486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:03:34 compute-1 sudo[216486]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:34 compute-1 sudo[216537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auyvmsroiazpvvktkygiudonemimtjsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090612.799489-2475-35260936063103/AnsiballZ_systemd.py'
Oct 10 10:03:34 compute-1 sudo[216537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:35 compute-1 ceph-mon[79167]: pgmap v521: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 170 B/s wr, 1 op/s
Oct 10 10:03:35 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:35 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:35 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:35.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:35 compute-1 python3.9[216539]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 10:03:35 compute-1 systemd[1]: Reloading.
Oct 10 10:03:35 compute-1 systemd-rc-local-generator[216573]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:03:35 compute-1 systemd-sysv-generator[216577]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:03:35 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:35 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:35 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:35.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:35 compute-1 systemd[1]: Starting multipathd container...
Oct 10 10:03:35 compute-1 systemd[1]: Started libcrun container.
Oct 10 10:03:35 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e303f44d45903bae5712811eb68960bc5a6f532ea353207d00dcafe64f97e977/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 10 10:03:35 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e303f44d45903bae5712811eb68960bc5a6f532ea353207d00dcafe64f97e977/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 10 10:03:35 compute-1 systemd[1]: Started /usr/bin/podman healthcheck run b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad.
Oct 10 10:03:35 compute-1 podman[216582]: 2025-10-10 10:03:35.822585987 +0000 UTC m=+0.161435703 container init b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 10:03:35 compute-1 multipathd[216598]: + sudo -E kolla_set_configs
Oct 10 10:03:35 compute-1 podman[216582]: 2025-10-10 10:03:35.860339993 +0000 UTC m=+0.199189599 container start b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 10:03:35 compute-1 podman[216582]: multipathd
Oct 10 10:03:35 compute-1 sudo[216604]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 10 10:03:35 compute-1 sudo[216604]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 10 10:03:35 compute-1 sudo[216604]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 10 10:03:35 compute-1 systemd[1]: Started multipathd container.
Oct 10 10:03:35 compute-1 sudo[216537]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:35 compute-1 multipathd[216598]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 10 10:03:35 compute-1 multipathd[216598]: INFO:__main__:Validating config file
Oct 10 10:03:35 compute-1 multipathd[216598]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 10 10:03:35 compute-1 multipathd[216598]: INFO:__main__:Writing out command to execute
Oct 10 10:03:35 compute-1 sudo[216604]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:35 compute-1 multipathd[216598]: ++ cat /run_command
Oct 10 10:03:35 compute-1 multipathd[216598]: + CMD='/usr/sbin/multipathd -d'
Oct 10 10:03:35 compute-1 multipathd[216598]: + ARGS=
Oct 10 10:03:35 compute-1 multipathd[216598]: + sudo kolla_copy_cacerts
Oct 10 10:03:35 compute-1 sudo[216628]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 10 10:03:35 compute-1 sudo[216628]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 10 10:03:35 compute-1 sudo[216628]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 10 10:03:35 compute-1 sudo[216628]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:35 compute-1 multipathd[216598]: + [[ ! -n '' ]]
Oct 10 10:03:35 compute-1 multipathd[216598]: + . kolla_extend_start
Oct 10 10:03:35 compute-1 multipathd[216598]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 10 10:03:35 compute-1 multipathd[216598]: Running command: '/usr/sbin/multipathd -d'
Oct 10 10:03:35 compute-1 multipathd[216598]: + umask 0022
Oct 10 10:03:35 compute-1 multipathd[216598]: + exec /usr/sbin/multipathd -d
Oct 10 10:03:36 compute-1 podman[216605]: 2025-10-10 10:03:35.999836383 +0000 UTC m=+0.121324152 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:03:36 compute-1 multipathd[216598]: 3529.769287 | --------start up--------
Oct 10 10:03:36 compute-1 multipathd[216598]: 3529.769315 | read /etc/multipath.conf
Oct 10 10:03:36 compute-1 systemd[1]: b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad-7873cd124ed0a0b8.service: Main process exited, code=exited, status=1/FAILURE
Oct 10 10:03:36 compute-1 systemd[1]: b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad-7873cd124ed0a0b8.service: Failed with result 'exit-code'.
Oct 10 10:03:36 compute-1 multipathd[216598]: 3529.775778 | path checkers start up
Oct 10 10:03:36 compute-1 python3.9[216786]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 10:03:37 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:37 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:37 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:37.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:37 compute-1 sudo[216938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zghvefqohoygqyrapjwwnidgtsitpazc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090616.9559226-2583-125435100149592/AnsiballZ_command.py'
Oct 10 10:03:37 compute-1 sudo[216938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:03:37 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:37 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:37 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:37.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:37 compute-1 python3.9[216940]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 10:03:37 compute-1 sudo[216938]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:37 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:03:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:37 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:03:37 compute-1 ceph-mon[79167]: pgmap v522: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 170 B/s wr, 1 op/s
Oct 10 10:03:38 compute-1 sudo[217104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swojufvguaracpcpmhkuflrcdtrdzixs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090617.8562572-2607-262433649606804/AnsiballZ_systemd.py'
Oct 10 10:03:38 compute-1 sudo[217104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:38 compute-1 python3.9[217106]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 10:03:38 compute-1 systemd[1]: Stopping multipathd container...
Oct 10 10:03:38 compute-1 multipathd[216598]: 3532.454241 | exit (signal)
Oct 10 10:03:38 compute-1 multipathd[216598]: 3532.454294 | --------shut down-------
Oct 10 10:03:38 compute-1 systemd[1]: libpod-b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad.scope: Deactivated successfully.
Oct 10 10:03:38 compute-1 podman[217110]: 2025-10-10 10:03:38.721116097 +0000 UTC m=+0.072063088 container died b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 10 10:03:38 compute-1 systemd[1]: b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad-7873cd124ed0a0b8.timer: Deactivated successfully.
Oct 10 10:03:38 compute-1 systemd[1]: Stopped /usr/bin/podman healthcheck run b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad.
Oct 10 10:03:38 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad-userdata-shm.mount: Deactivated successfully.
Oct 10 10:03:38 compute-1 systemd[1]: var-lib-containers-storage-overlay-e303f44d45903bae5712811eb68960bc5a6f532ea353207d00dcafe64f97e977-merged.mount: Deactivated successfully.
Oct 10 10:03:38 compute-1 podman[217125]: 2025-10-10 10:03:38.828734742 +0000 UTC m=+0.074920438 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 10 10:03:38 compute-1 podman[217110]: 2025-10-10 10:03:38.91462568 +0000 UTC m=+0.265572711 container cleanup b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 10 10:03:38 compute-1 podman[217110]: multipathd
Oct 10 10:03:39 compute-1 podman[217159]: multipathd
Oct 10 10:03:39 compute-1 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Oct 10 10:03:39 compute-1 systemd[1]: Stopped multipathd container.
Oct 10 10:03:39 compute-1 systemd[1]: Starting multipathd container...
Oct 10 10:03:39 compute-1 systemd[1]: Started libcrun container.
Oct 10 10:03:39 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:39 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:39 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e303f44d45903bae5712811eb68960bc5a6f532ea353207d00dcafe64f97e977/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 10 10:03:39 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:39.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:39 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e303f44d45903bae5712811eb68960bc5a6f532ea353207d00dcafe64f97e977/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 10 10:03:39 compute-1 systemd[1]: Started /usr/bin/podman healthcheck run b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad.
Oct 10 10:03:39 compute-1 podman[217173]: 2025-10-10 10:03:39.199468799 +0000 UTC m=+0.157996358 container init b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 10:03:39 compute-1 multipathd[217189]: + sudo -E kolla_set_configs
Oct 10 10:03:39 compute-1 podman[217173]: 2025-10-10 10:03:39.232938938 +0000 UTC m=+0.191466467 container start b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 10:03:39 compute-1 sudo[217195]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 10 10:03:39 compute-1 podman[217173]: multipathd
Oct 10 10:03:39 compute-1 sudo[217195]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 10 10:03:39 compute-1 sudo[217195]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 10 10:03:39 compute-1 systemd[1]: Started multipathd container.
Oct 10 10:03:39 compute-1 sudo[217104]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:39 compute-1 multipathd[217189]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 10 10:03:39 compute-1 multipathd[217189]: INFO:__main__:Validating config file
Oct 10 10:03:39 compute-1 multipathd[217189]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 10 10:03:39 compute-1 multipathd[217189]: INFO:__main__:Writing out command to execute
Oct 10 10:03:39 compute-1 sudo[217195]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:39 compute-1 multipathd[217189]: ++ cat /run_command
Oct 10 10:03:39 compute-1 multipathd[217189]: + CMD='/usr/sbin/multipathd -d'
Oct 10 10:03:39 compute-1 multipathd[217189]: + ARGS=
Oct 10 10:03:39 compute-1 multipathd[217189]: + sudo kolla_copy_cacerts
Oct 10 10:03:39 compute-1 sudo[217230]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 10 10:03:39 compute-1 sudo[217230]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 10 10:03:39 compute-1 sudo[217230]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 10 10:03:39 compute-1 sudo[217230]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:39 compute-1 multipathd[217189]: + [[ ! -n '' ]]
Oct 10 10:03:39 compute-1 multipathd[217189]: + . kolla_extend_start
Oct 10 10:03:39 compute-1 multipathd[217189]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 10 10:03:39 compute-1 multipathd[217189]: Running command: '/usr/sbin/multipathd -d'
Oct 10 10:03:39 compute-1 multipathd[217189]: + umask 0022
Oct 10 10:03:39 compute-1 multipathd[217189]: + exec /usr/sbin/multipathd -d
Oct 10 10:03:39 compute-1 podman[217196]: 2025-10-10 10:03:39.345010765 +0000 UTC m=+0.095369529 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 10 10:03:39 compute-1 systemd[1]: b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad-42fb34184ad969d6.service: Main process exited, code=exited, status=1/FAILURE
Oct 10 10:03:39 compute-1 systemd[1]: b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad-42fb34184ad969d6.service: Failed with result 'exit-code'.
Oct 10 10:03:39 compute-1 multipathd[217189]: 3533.121084 | --------start up--------
Oct 10 10:03:39 compute-1 multipathd[217189]: 3533.121102 | read /etc/multipath.conf
Oct 10 10:03:39 compute-1 multipathd[217189]: 3533.126628 | path checkers start up
Oct 10 10:03:39 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:39 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:03:39 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:39.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:03:39 compute-1 sudo[217381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rozqkliunmtafjuzmqsxiesjazgtuntu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090619.5205617-2631-171181066536765/AnsiballZ_file.py'
Oct 10 10:03:39 compute-1 sudo[217381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:39 compute-1 ceph-mon[79167]: pgmap v523: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 682 B/s wr, 2 op/s
Oct 10 10:03:40 compute-1 python3.9[217383]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:03:40 compute-1 sudo[217381]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:40 compute-1 sudo[217533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uscajlaagcsbbifrsayecjppiwxahkfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090620.6403537-2668-102671458339753/AnsiballZ_file.py'
Oct 10 10:03:40 compute-1 sudo[217533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:41 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:41 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:41 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:41.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:41 compute-1 python3.9[217535]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 10 10:03:41 compute-1 sudo[217533]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:41 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:41 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:41 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:41.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:41 compute-1 sudo[217686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzbsxdihezcfupmbgilhvndhrtvsdugg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090621.460292-2691-24756047734490/AnsiballZ_modprobe.py'
Oct 10 10:03:41 compute-1 sudo[217686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:42 compute-1 ceph-mon[79167]: pgmap v524: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 10:03:42 compute-1 python3.9[217688]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Oct 10 10:03:42 compute-1 kernel: Key type psk registered
Oct 10 10:03:42 compute-1 sudo[217686]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:03:42.196 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:03:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:03:42.197 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:03:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:03:42.197 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:03:42 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:03:42 compute-1 sudo[217849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrydvkerdqrrodswmgmzuiwdaldfzscr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090622.375381-2715-91983414865060/AnsiballZ_stat.py'
Oct 10 10:03:42 compute-1 sudo[217849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:42 compute-1 podman[217851]: 2025-10-10 10:03:42.889875279 +0000 UTC m=+0.131425439 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller)
Oct 10 10:03:42 compute-1 python3.9[217852]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:03:42 compute-1 sudo[217849]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:43 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:43 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:43 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:43.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:43 compute-1 sudo[217999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvlgwecvqfridicuzrtdjaijcusvjgrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090622.375381-2715-91983414865060/AnsiballZ_copy.py'
Oct 10 10:03:43 compute-1 sudo[217999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:43 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:43 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:43 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:43.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:43 compute-1 python3.9[218001]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760090622.375381-2715-91983414865060/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:03:43 compute-1 sudo[217999]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 10:03:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 10 10:03:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 10 10:03:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 10 10:03:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 10 10:03:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 10 10:03:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 10 10:03:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:03:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:03:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:03:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 10 10:03:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:03:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 10 10:03:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 10 10:03:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 10 10:03:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 10 10:03:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 10 10:03:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 10 10:03:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 10 10:03:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 10 10:03:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 10 10:03:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 10 10:03:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 10 10:03:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 10 10:03:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 10:03:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 10 10:03:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:43 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 10:03:44 compute-1 ceph-mon[79167]: pgmap v525: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 10:03:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:44 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a54000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:44 compute-1 sudo[218165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgzgbbjdcdigskixaeqmirwhejpyyrfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090623.9416804-2763-84764292626388/AnsiballZ_lineinfile.py'
Oct 10 10:03:44 compute-1 sudo[218165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:44 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a50001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:44 compute-1 python3.9[218167]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:03:44 compute-1 sudo[218165]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:45 compute-1 sudo[218317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebrchmgjegqcpypqfvxkgjggbbkkcnxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090624.7902322-2787-17036812029953/AnsiballZ_systemd.py'
Oct 10 10:03:45 compute-1 sudo[218317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:45 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:45 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:03:45 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:45.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:03:45 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:45 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:45 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:45.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:45 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a50001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:45 compute-1 python3.9[218319]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 10:03:45 compute-1 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 10 10:03:45 compute-1 systemd[1]: Stopped Load Kernel Modules.
Oct 10 10:03:45 compute-1 systemd[1]: Stopping Load Kernel Modules...
Oct 10 10:03:45 compute-1 systemd[1]: Starting Load Kernel Modules...
Oct 10 10:03:45 compute-1 systemd[1]: Finished Load Kernel Modules.
Oct 10 10:03:45 compute-1 sudo[218317]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:46 compute-1 ceph-mon[79167]: pgmap v526: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:03:46 compute-1 sudo[218488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtfrdmnukspklzgnabxrxwdozxeaecmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090625.8912258-2811-27555140280290/AnsiballZ_setup.py'
Oct 10 10:03:46 compute-1 sudo[218488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:46 compute-1 podman[218448]: 2025-10-10 10:03:46.288985952 +0000 UTC m=+0.106797483 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 10 10:03:46 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:46 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a48001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:46 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:46 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a3c000d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:46 compute-1 python3.9[218497]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 10 10:03:46 compute-1 sudo[218488]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:47 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:03:47 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:47 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:03:47 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:47.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:03:47 compute-1 sudo[218579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aktfwujteoolpabfbmqrsmzjyhpknudg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090625.8912258-2811-27555140280290/AnsiballZ_dnf.py'
Oct 10 10:03:47 compute-1 sudo[218579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:03:47 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:47 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:47 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:47.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:47 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100347 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 10:03:47 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:47 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a50001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:47 compute-1 python3.9[218581]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 10 10:03:48 compute-1 ceph-mon[79167]: pgmap v527: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:03:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:48 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a50001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:48 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a48002720 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:49 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:49 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:03:49 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:49.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:03:49 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:49 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:03:49 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:49.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:03:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:49 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a3c001820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:50 compute-1 ceph-mon[79167]: pgmap v528: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:03:50 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:50 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a50001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:50 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:50 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a50001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:51 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:51 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:51 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:51.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:51 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:51 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:03:51 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:51.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:03:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:51 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a48002720 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:52 compute-1 ceph-mon[79167]: pgmap v529: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 10:03:52 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:52 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a3c001820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:03:52 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:52 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a50001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:53 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:53 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:03:53 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:53.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:03:53 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:53 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:03:53 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:53.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:03:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:53 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a50001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:53 compute-1 systemd[1]: Reloading.
Oct 10 10:03:53 compute-1 systemd-rc-local-generator[218619]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:03:53 compute-1 systemd-sysv-generator[218623]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:03:54 compute-1 systemd[1]: Reloading.
Oct 10 10:03:54 compute-1 ceph-mon[79167]: pgmap v530: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 10:03:54 compute-1 systemd-sysv-generator[218650]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:03:54 compute-1 systemd-rc-local-generator[218647]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:03:54 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:54 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a48002720 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:54 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:54 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a3c001820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:54 compute-1 systemd-logind[789]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 10 10:03:54 compute-1 systemd-logind[789]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 10 10:03:54 compute-1 lvm[218698]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 10:03:54 compute-1 lvm[218698]: VG ceph_vg0 finished
Oct 10 10:03:54 compute-1 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 10 10:03:54 compute-1 systemd[1]: Starting man-db-cache-update.service...
Oct 10 10:03:54 compute-1 systemd[1]: Reloading.
Oct 10 10:03:54 compute-1 systemd-sysv-generator[218753]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:03:54 compute-1 systemd-rc-local-generator[218746]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:03:55 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:55 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:03:55 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:55.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:03:55 compute-1 sudo[218766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:03:55 compute-1 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 10 10:03:55 compute-1 sudo[218766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:03:55 compute-1 sudo[218766]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:55 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:55 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:55 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:55.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:55 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a50001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:55 compute-1 sudo[218579]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:56 compute-1 ceph-mon[79167]: pgmap v531: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 10:03:56 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:56 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a440016e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:56 compute-1 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 10 10:03:56 compute-1 systemd[1]: Finished man-db-cache-update.service.
Oct 10 10:03:56 compute-1 systemd[1]: man-db-cache-update.service: Consumed 1.761s CPU time.
Oct 10 10:03:56 compute-1 systemd[1]: run-r79cd6aa339d64d0c98f1ab76bd6f0e65.service: Deactivated successfully.
Oct 10 10:03:56 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:56 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a48003bb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:57 compute-1 ceph-mon[79167]: pgmap v532: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 10:03:57 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:57 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:57 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:57.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:57 compute-1 sudo[220064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnjehuldcfxmxqqimrhsoooydvsmufta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090636.908378-2847-18089209560771/AnsiballZ_file.py'
Oct 10 10:03:57 compute-1 sudo[220064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:03:57 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:57 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:57 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:57.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:57 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a3c002cb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:57 compute-1 python3.9[220066]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.iscsid_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:03:57 compute-1 sudo[220064]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:58 compute-1 python3.9[220217]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 10 10:03:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:58 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a50001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:58 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a50001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:59 compute-1 sudo[220371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovhqqishcydljwjmczehfvynrvecxmbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090638.8125691-2899-229982958489849/AnsiballZ_file.py'
Oct 10 10:03:59 compute-1 sudo[220371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:03:59 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:59 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:03:59 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:03:59.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:03:59 compute-1 python3.9[220373]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:03:59 compute-1 sudo[220371]: pam_unix(sudo:session): session closed for user root
Oct 10 10:03:59 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:03:59 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:03:59 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:03:59.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:03:59 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:03:59 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a48003bb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:03:59 compute-1 ceph-mon[79167]: pgmap v533: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 10 10:04:00 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:00 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a3c002cb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:00 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100400 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 10:04:00 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:00 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a440021e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:00 compute-1 sudo[220524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffsfhavdgzadlvmjrtwrdpcrcbyajqiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090639.9448783-2933-106049570714189/AnsiballZ_systemd_service.py'
Oct 10 10:04:00 compute-1 sudo[220524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:00 compute-1 python3.9[220526]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 10 10:04:01 compute-1 systemd[1]: Reloading.
Oct 10 10:04:01 compute-1 systemd-sysv-generator[220559]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:04:01 compute-1 systemd-rc-local-generator[220555]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:04:01 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:01 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:04:01 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:01.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:04:01 compute-1 sudo[220524]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:01 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:01 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:04:01 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:01.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:04:01 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:01 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a50003d80 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:01 compute-1 ceph-mon[79167]: pgmap v534: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:04:01 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:04:02 compute-1 python3.9[220712]: ansible-ansible.builtin.service_facts Invoked
Oct 10 10:04:02 compute-1 network[220729]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 10 10:04:02 compute-1 network[220730]: 'network-scripts' will be removed from distribution in near future.
Oct 10 10:04:02 compute-1 network[220731]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 10 10:04:02 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:02 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a48003bb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:04:02 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:02 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a3c002cb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:03 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:03 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 10:04:03 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:03.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 10:04:03 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:03 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:03 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:03.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:03 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a440021e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:03 compute-1 ceph-mon[79167]: pgmap v535: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 10:04:04 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:04 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a50003d80 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:04 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:04 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a48003bb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:05 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:05 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:05 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:05.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:05 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:05 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:05 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:05.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:05 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:05 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a3c002cb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:05 compute-1 ceph-mon[79167]: pgmap v536: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:04:06 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:06 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a440021e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:06 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:06 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a50003d80 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:07 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:07 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:07 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:07.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:04:07 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:07 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:04:07 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:07.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:04:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:07 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a48003bb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:07 compute-1 ceph-mon[79167]: pgmap v537: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:04:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:08 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a3c002cb0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:08 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a440021e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:09 compute-1 sudo[221026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dacqzkywxlktalgbncotglfsbbelabhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090648.6859937-2989-274420485745515/AnsiballZ_systemd_service.py'
Oct 10 10:04:09 compute-1 podman[220984]: 2025-10-10 10:04:09.071036863 +0000 UTC m=+0.078282484 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 10 10:04:09 compute-1 sudo[221026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:09 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:09 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:09 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:09.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:09 compute-1 python3.9[221031]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 10:04:09 compute-1 sudo[221026]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:09 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:09 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:09 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:09.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:09 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a50003d80 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:09 compute-1 podman[221034]: 2025-10-10 10:04:09.49724192 +0000 UTC m=+0.085575483 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001)
Oct 10 10:04:09 compute-1 ceph-mon[79167]: pgmap v538: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:04:09 compute-1 sudo[221203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qownnannmnnpjijnsjqafubvbrxrdaps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090649.5919943-2989-154169725197445/AnsiballZ_systemd_service.py'
Oct 10 10:04:09 compute-1 sudo[221203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:10 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:10 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:04:10 compute-1 python3.9[221205]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 10:04:10 compute-1 sudo[221203]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:10 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:10 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a48003bb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:10 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:10 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a3c002cb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:10 compute-1 sudo[221356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brxtsualkkvuksbvapbqjrtexabyxwnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090650.444115-2989-278030824911227/AnsiballZ_systemd_service.py'
Oct 10 10:04:10 compute-1 sudo[221356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:11 compute-1 python3.9[221358]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 10:04:11 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:11 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:11 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:11.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:11 compute-1 sudo[221356]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:11 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:11 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:11 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:11.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:11 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a440021e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:11 compute-1 sudo[221510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsjsrxwhcqoxlcjvfhyyowbivtgxcvpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090651.3773448-2989-172696276597699/AnsiballZ_systemd_service.py'
Oct 10 10:04:11 compute-1 sudo[221510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:11 compute-1 ceph-mon[79167]: pgmap v539: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:04:12 compute-1 python3.9[221512]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 10:04:12 compute-1 sudo[221510]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:12 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:12 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a50003d80 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:04:12 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:12 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a48003bb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:12 compute-1 sudo[221663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drwpgbjgrbmatxoafmbnnyhwbnymurzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090652.286698-2989-217852002111347/AnsiballZ_systemd_service.py'
Oct 10 10:04:12 compute-1 sudo[221663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:12 compute-1 python3.9[221665]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 10:04:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:13 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:04:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:13 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:04:13 compute-1 sudo[221663]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:13 compute-1 podman[221667]: 2025-10-10 10:04:13.157876519 +0000 UTC m=+0.118237527 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 10 10:04:13 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:13 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:13 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:13.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:13 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:13 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:13 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:13.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:13 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a3c002cb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:13 compute-1 sudo[221843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyhfzjindytdnnoykorlczvroctnelnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090653.2540874-2989-242484818810652/AnsiballZ_systemd_service.py'
Oct 10 10:04:13 compute-1 sudo[221843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:13 compute-1 ceph-mon[79167]: pgmap v540: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 10:04:13 compute-1 python3.9[221845]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 10:04:14 compute-1 sudo[221843]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:14 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a440021e0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:14 compute-1 sudo[221996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyoszodmidroqtpkqnyycacattopqfpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090654.149699-2989-132929300450000/AnsiballZ_systemd_service.py'
Oct 10 10:04:14 compute-1 sudo[221996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:14 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a50003d80 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:14 compute-1 python3.9[221998]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 10:04:14 compute-1 sudo[221996]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:15 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:15 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:15 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:15.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:15 compute-1 sudo[222112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:04:15 compute-1 sudo[222112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:04:15 compute-1 sudo[222112]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:15 compute-1 sudo[222175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzhldgqbwuwtydqzkfrurifnehzfhedk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090655.0177877-2989-52778185188630/AnsiballZ_systemd_service.py'
Oct 10 10:04:15 compute-1 sudo[222175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:15 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:15 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:15 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:15.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:15 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a48003bb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:15 compute-1 python3.9[222177]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 10:04:15 compute-1 sudo[222175]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:15 compute-1 ceph-mon[79167]: pgmap v541: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 10:04:16 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:16 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 10:04:16 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:16 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a3c002cb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:16 compute-1 sudo[222346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxupvzjosnsrtcbttndndzxokiiaywex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090656.1024785-3166-256967163262058/AnsiballZ_file.py'
Oct 10 10:04:16 compute-1 sudo[222346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:16 compute-1 podman[222304]: 2025-10-10 10:04:16.488647618 +0000 UTC m=+0.090804327 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:04:16 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:16 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a48003bb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:16 compute-1 python3.9[222352]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:04:16 compute-1 sudo[222346]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:04:17 compute-1 sudo[222502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syeiyvilerqpighqdzmyroradwcmesyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090656.8733203-3166-222889181839792/AnsiballZ_file.py'
Oct 10 10:04:17 compute-1 sudo[222502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:17 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:17 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:17 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:17.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:04:17 compute-1 python3.9[222504]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:04:17 compute-1 sudo[222502]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:17 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:17 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:17 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:17.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:17 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:17 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a24000b60 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:17 compute-1 sudo[222655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdnxvmprxpvncorypinzlymdrjrswzgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090657.545175-3166-187388961341611/AnsiballZ_file.py'
Oct 10 10:04:17 compute-1 sudo[222655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:17 compute-1 ceph-mon[79167]: pgmap v542: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Oct 10 10:04:18 compute-1 python3.9[222657]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:04:18 compute-1 sudo[222655]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:18 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a2c000d00 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:18 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a3c003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:18 compute-1 sudo[222807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnbdewnpocuyiaqtaitoiwzjfwaltwis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090658.251859-3166-150045278613201/AnsiballZ_file.py'
Oct 10 10:04:18 compute-1 sudo[222807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:18 compute-1 python3.9[222809]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:04:18 compute-1 sudo[222807]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:19 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:19 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:19 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:19.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:19 compute-1 sudo[222960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjrfjuivylvzbltpqthzerkxhmikkpam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090659.0136328-3166-76604697225267/AnsiballZ_file.py'
Oct 10 10:04:19 compute-1 sudo[222960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:19 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:19 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:19 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:19.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:19 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a3c003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:19 compute-1 python3.9[222962]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:04:19 compute-1 sudo[222960]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:19 compute-1 ceph-mon[79167]: pgmap v543: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 10:04:20 compute-1 sudo[223112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qodvyevvwesmqpueclpgutxgwkizfrnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090659.7555254-3166-90654531114781/AnsiballZ_file.py'
Oct 10 10:04:20 compute-1 sudo[223112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:20 compute-1 python3.9[223114]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:04:20 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:20 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a240016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:20 compute-1 sudo[223112]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:20 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:20 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a2c001820 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:20 compute-1 sudo[223264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqixdropzdnqpafkqnowozttpzfsvmzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090660.4775321-3166-232799950671004/AnsiballZ_file.py'
Oct 10 10:04:20 compute-1 sudo[223264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:21 compute-1 python3.9[223266]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:04:21 compute-1 sudo[223264]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:21 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:21 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:21 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:21.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:21 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a48003bb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:21 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:21 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:21 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:21.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:21 compute-1 sudo[223417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqkffavclqdcaoctdfpwytgchtruuasu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090661.2166636-3166-276070231308671/AnsiballZ_file.py'
Oct 10 10:04:21 compute-1 sudo[223417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:21 compute-1 python3.9[223419]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:04:21 compute-1 sudo[223417]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:21 compute-1 ceph-mon[79167]: pgmap v544: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:04:22 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:22 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a3c003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:22 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:04:22 compute-1 sudo[223569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kftujxbczcptkdxqjnzwzlktvesznosz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090662.062105-3337-117280285252537/AnsiballZ_file.py'
Oct 10 10:04:22 compute-1 sudo[223569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:22 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100422 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 10:04:22 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:22 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a240016a0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:22 compute-1 python3.9[223571]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:04:22 compute-1 sudo[223569]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:23 compute-1 sudo[223721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avjhyfjerpycapwcioaspmlofucedpbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090662.7589493-3337-164979533336677/AnsiballZ_file.py'
Oct 10 10:04:23 compute-1 sudo[223721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:23 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:23 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:23 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:23.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:23 compute-1 python3.9[223723]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:04:23 compute-1 sudo[223721]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:23 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a2c001820 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:23 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:23 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:04:23 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:23.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:04:23 compute-1 sudo[223874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouuyesuojaoutzlupfhtgyvsydwbwgvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090663.469271-3337-57148562120772/AnsiballZ_file.py'
Oct 10 10:04:23 compute-1 sudo[223874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:23 compute-1 ceph-mon[79167]: pgmap v545: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:04:24 compute-1 python3.9[223876]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:04:24 compute-1 sudo[223874]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:24 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:24 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a48003bb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:24 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:24 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a3c003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:24 compute-1 sudo[224026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bckfvkfqqsjvgdeakfvzczcbetgnzuaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090664.23365-3337-15689549462471/AnsiballZ_file.py'
Oct 10 10:04:24 compute-1 sudo[224026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:24 compute-1 python3.9[224028]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:04:24 compute-1 sudo[224026]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:25 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:25 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:25 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:25.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:25 compute-1 sudo[224178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzlspoqakolehtcwwvduwxkgvmlikaoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090664.9508762-3337-245590745691374/AnsiballZ_file.py'
Oct 10 10:04:25 compute-1 sudo[224178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:25 compute-1 python3.9[224180]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:04:25 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:25 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a3c003db0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:25 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:25 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:04:25 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:25.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:04:25 compute-1 sudo[224178]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:25 compute-1 ceph-mon[79167]: pgmap v546: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 10:04:25 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #37. Immutable memtables: 0.
Oct 10 10:04:25 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:04:25.996673) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 10:04:25 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 37
Oct 10 10:04:25 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090665996719, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 931, "num_deletes": 251, "total_data_size": 2009372, "memory_usage": 2038960, "flush_reason": "Manual Compaction"}
Oct 10 10:04:25 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #38: started
Oct 10 10:04:26 compute-1 sudo[224331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iedrjlmwgrusleiwdtrvdgrdachwfdff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090665.6650405-3337-253740262406727/AnsiballZ_file.py'
Oct 10 10:04:26 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090666007858, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 38, "file_size": 1327534, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19818, "largest_seqno": 20744, "table_properties": {"data_size": 1323321, "index_size": 1929, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9372, "raw_average_key_size": 19, "raw_value_size": 1314853, "raw_average_value_size": 2727, "num_data_blocks": 86, "num_entries": 482, "num_filter_entries": 482, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760090594, "oldest_key_time": 1760090594, "file_creation_time": 1760090665, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:04:26 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 11259 microseconds, and 6950 cpu microseconds.
Oct 10 10:04:26 compute-1 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:04:26 compute-1 sudo[224331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:26 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:04:26.007930) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #38: 1327534 bytes OK
Oct 10 10:04:26 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:04:26.007957) [db/memtable_list.cc:519] [default] Level-0 commit table #38 started
Oct 10 10:04:26 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:04:26.011482) [db/memtable_list.cc:722] [default] Level-0 commit table #38: memtable #1 done
Oct 10 10:04:26 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:04:26.011537) EVENT_LOG_v1 {"time_micros": 1760090666011525, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 10:04:26 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:04:26.011564) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 10:04:26 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 2004680, prev total WAL file size 2004680, number of live WAL files 2.
Oct 10 10:04:26 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000034.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:04:26 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:04:26.012667) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Oct 10 10:04:26 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 10:04:26 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [38(1296KB)], [36(13MB)]
Oct 10 10:04:26 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090666012736, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [38], "files_L6": [36], "score": -1, "input_data_size": 15030547, "oldest_snapshot_seqno": -1}
Oct 10 10:04:26 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #39: 4971 keys, 12865629 bytes, temperature: kUnknown
Oct 10 10:04:26 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090666080532, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 39, "file_size": 12865629, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12831618, "index_size": 20461, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12485, "raw_key_size": 126838, "raw_average_key_size": 25, "raw_value_size": 12740534, "raw_average_value_size": 2562, "num_data_blocks": 839, "num_entries": 4971, "num_filter_entries": 4971, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089522, "oldest_key_time": 0, "file_creation_time": 1760090666, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 39, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:04:26 compute-1 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:04:26 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:04:26.080851) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 12865629 bytes
Oct 10 10:04:26 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:04:26.082283) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 221.3 rd, 189.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 13.1 +0.0 blob) out(12.3 +0.0 blob), read-write-amplify(21.0) write-amplify(9.7) OK, records in: 5487, records dropped: 516 output_compression: NoCompression
Oct 10 10:04:26 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:04:26.082313) EVENT_LOG_v1 {"time_micros": 1760090666082300, "job": 20, "event": "compaction_finished", "compaction_time_micros": 67904, "compaction_time_cpu_micros": 45499, "output_level": 6, "num_output_files": 1, "total_output_size": 12865629, "num_input_records": 5487, "num_output_records": 4971, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 10:04:26 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:04:26 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090666082863, "job": 20, "event": "table_file_deletion", "file_number": 38}
Oct 10 10:04:26 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000036.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:04:26 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090666086801, "job": 20, "event": "table_file_deletion", "file_number": 36}
Oct 10 10:04:26 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:04:26.012554) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:04:26 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:04:26.086915) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:04:26 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:04:26.086922) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:04:26 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:04:26.086926) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:04:26 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:04:26.086929) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:04:26 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:04:26.086932) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:04:26 compute-1 python3.9[224333]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:04:26 compute-1 sudo[224331]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:26 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:26 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a2c001820 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:26 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:26 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a48003bb0 fd 47 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:26 compute-1 sudo[224483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atubqupfpqhlyvmoqartttictxiynbch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090666.380389-3337-150565311436297/AnsiballZ_file.py'
Oct 10 10:04:26 compute-1 sudo[224483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:26 compute-1 python3.9[224485]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:04:26 compute-1 sudo[224483]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:27 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:27 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:27 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:27.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:27 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:04:27 compute-1 sudo[224636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyxueaehvjlltwartxwzvtjcgkrrcxvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090667.084395-3337-278727201950811/AnsiballZ_file.py'
Oct 10 10:04:27 compute-1 sudo[224636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:27 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[215905]: 10/10/2025 10:04:27 : epoch 68e8d9f3 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6a24002720 fd 47 proxy ignored for local
Oct 10 10:04:27 compute-1 kernel: ganesha.nfsd[222179]: segfault at 50 ip 00007f6b02d4732e sp 00007f6ad17f9210 error 4 in libntirpc.so.5.8[7f6b02d2c000+2c000] likely on CPU 3 (core 0, socket 3)
Oct 10 10:04:27 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 10 10:04:27 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:27 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:04:27 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:27.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:04:27 compute-1 systemd[1]: Started Process Core Dump (PID 224639/UID 0).
Oct 10 10:04:27 compute-1 python3.9[224638]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:04:27 compute-1 sudo[224636]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:28 compute-1 ceph-mon[79167]: pgmap v547: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 10:04:28 compute-1 sudo[224790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfwbhpvjmxcptiljamtsdvdwdtaqwduu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090667.9965534-3511-53719418309128/AnsiballZ_command.py'
Oct 10 10:04:28 compute-1 sudo[224790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:28 compute-1 systemd-coredump[224640]: Process 215909 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 55:
                                                    #0  0x00007f6b02d4732e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Oct 10 10:04:28 compute-1 python3.9[224792]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 10:04:28 compute-1 sudo[224790]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:28 compute-1 systemd[1]: systemd-coredump@8-224639-0.service: Deactivated successfully.
Oct 10 10:04:28 compute-1 systemd[1]: systemd-coredump@8-224639-0.service: Consumed 1.002s CPU time.
Oct 10 10:04:28 compute-1 podman[224799]: 2025-10-10 10:04:28.672297826 +0000 UTC m=+0.042313119 container died 37e8592f37054ae63e3280e5a3a91716481e4ea058c2c324979351a49841a1a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 10 10:04:28 compute-1 systemd[1]: var-lib-containers-storage-overlay-fac92ad902689e585fa54275be7aa322ccd3fe3e1d273fc9cc6be5c234ab6b30-merged.mount: Deactivated successfully.
Oct 10 10:04:28 compute-1 podman[224799]: 2025-10-10 10:04:28.72064858 +0000 UTC m=+0.090663843 container remove 37e8592f37054ae63e3280e5a3a91716481e4ea058c2c324979351a49841a1a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:04:28 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Main process exited, code=exited, status=139/n/a
Oct 10 10:04:28 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Failed with result 'exit-code'.
Oct 10 10:04:28 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 1.522s CPU time.
Oct 10 10:04:29 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:29 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:29 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:29.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:29 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:29 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:29 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:29.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:29 compute-1 python3.9[224990]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 10 10:04:30 compute-1 ceph-mon[79167]: pgmap v548: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 10:04:30 compute-1 sudo[225140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inipkcttxpqpzzaklunedxifjqvbxhcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090669.9938924-3565-101262607211598/AnsiballZ_systemd_service.py'
Oct 10 10:04:30 compute-1 sudo[225140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:30 compute-1 python3.9[225142]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 10 10:04:30 compute-1 systemd[1]: Reloading.
Oct 10 10:04:30 compute-1 systemd-rc-local-generator[225170]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:04:30 compute-1 systemd-sysv-generator[225174]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:04:31 compute-1 sudo[225140]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:31 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:31 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:31 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:31.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:31 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:31 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:31 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:31.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:31 compute-1 sudo[225329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgmkdenkqrfaiccyeasbheutugctgult ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090671.3074434-3589-122976776365324/AnsiballZ_command.py'
Oct 10 10:04:31 compute-1 sudo[225329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:31 compute-1 python3.9[225331]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 10:04:31 compute-1 sudo[225329]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:32 compute-1 ceph-mon[79167]: pgmap v549: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 10:04:32 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:04:32 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:04:32 compute-1 sudo[225482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvpjbabnkwerhrtwijpjurraoceaclvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090672.0027072-3589-84172664727381/AnsiballZ_command.py'
Oct 10 10:04:32 compute-1 sudo[225482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:32 compute-1 sudo[225485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:04:32 compute-1 sudo[225485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:04:32 compute-1 sudo[225485]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:32 compute-1 sudo[225510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Oct 10 10:04:32 compute-1 sudo[225510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:04:32 compute-1 python3.9[225484]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 10:04:32 compute-1 sudo[225482]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:32 compute-1 sudo[225510]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:33 compute-1 sudo[225679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:04:33 compute-1 sudo[225679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:04:33 compute-1 sudo[225679]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:33 compute-1 sudo[225731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hogwpecouqkomcivakyzdnhrrwhailnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090672.8151824-3589-40641004085078/AnsiballZ_command.py'
Oct 10 10:04:33 compute-1 sudo[225731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:33 compute-1 sudo[225732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:04:33 compute-1 sudo[225732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:04:33 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:33 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:04:33 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:33.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:04:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100433 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 10:04:33 compute-1 python3.9[225745]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 10:04:33 compute-1 sudo[225731]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100433 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 10:04:33 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:33 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:04:33 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:33.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:04:33 compute-1 unix_chkpwd[225822]: password check failed for user (root)
Oct 10 10:04:33 compute-1 sshd-session[225613]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.233  user=root
Oct 10 10:04:33 compute-1 sudo[225732]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:33 compute-1 ceph-mon[79167]: pgmap v550: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 10 10:04:33 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:04:33 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:04:33 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:04:33 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:04:33 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:04:33 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:04:33 compute-1 sudo[225941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwuvufjoctzsejfzswsztccfwskbelsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090673.580287-3589-106659235407250/AnsiballZ_command.py'
Oct 10 10:04:33 compute-1 sudo[225941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:34 compute-1 python3.9[225943]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 10:04:34 compute-1 sudo[225941]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:34 compute-1 sudo[226094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbytgxmzutzmyhcdwuqzrcdtauutljfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090674.290535-3589-204419772111270/AnsiballZ_command.py'
Oct 10 10:04:34 compute-1 sudo[226094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:34 compute-1 python3.9[226096]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 10:04:34 compute-1 sudo[226094]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:34 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:04:34 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:04:34 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:04:34 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:04:34 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:04:35 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:35 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:04:35 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:35.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:04:35 compute-1 sudo[226247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlhbfjbzhfcdidoocffzhwksyudpksql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090675.0020761-3589-168327059175018/AnsiballZ_command.py'
Oct 10 10:04:35 compute-1 sudo[226247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:35 compute-1 sudo[226251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:04:35 compute-1 sudo[226251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:04:35 compute-1 sudo[226251]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:35 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:35 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:35 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:35.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:35 compute-1 python3.9[226249]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 10:04:35 compute-1 sudo[226247]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:35 compute-1 sshd-session[225613]: Failed password for root from 80.94.93.233 port 60050 ssh2
Oct 10 10:04:35 compute-1 unix_chkpwd[226376]: password check failed for user (root)
Oct 10 10:04:35 compute-1 ceph-mon[79167]: pgmap v551: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:04:36 compute-1 sudo[226427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jttdygtjtpaccttokyprqzjfmjakapaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090675.7170537-3589-138546781937710/AnsiballZ_command.py'
Oct 10 10:04:36 compute-1 sudo[226427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:36 compute-1 python3.9[226429]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 10:04:36 compute-1 sudo[226427]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:36 compute-1 sudo[226580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkixsttnyzekmzuvpbhmcxlbfbcogtch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090676.4744968-3589-233109159478911/AnsiballZ_command.py'
Oct 10 10:04:36 compute-1 sudo[226580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:37 compute-1 python3.9[226582]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 10 10:04:37 compute-1 sudo[226580]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:37 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:37 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:37 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:37.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:04:37 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:37 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:37 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:37.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:37 compute-1 ceph-mon[79167]: pgmap v552: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:04:38 compute-1 sshd-session[225613]: Failed password for root from 80.94.93.233 port 60050 ssh2
Oct 10 10:04:38 compute-1 sudo[226734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdummndqqjswekrdilukaldirlgevejw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090678.3010824-3796-227647886880834/AnsiballZ_file.py'
Oct 10 10:04:38 compute-1 sudo[226734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:38 compute-1 python3.9[226736]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:04:38 compute-1 sudo[226734]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:39 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Scheduled restart job, restart counter is at 9.
Oct 10 10:04:39 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 10:04:39 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 1.522s CPU time.
Oct 10 10:04:39 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 10:04:39 compute-1 sudo[226744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:04:39 compute-1 sudo[226744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:04:39 compute-1 sudo[226744]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:39 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:39 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:39 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:39.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:39 compute-1 podman[226915]: 2025-10-10 10:04:39.323032741 +0000 UTC m=+0.056866007 container create d9857f148c0b09e0041b575e39e53db43b365e45c1430ca88b3c2539bad267b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:04:39 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/693527e3052a34160e40c573904d9d4bb456ea383bca413c0394b60bb4ac049a/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 10 10:04:39 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/693527e3052a34160e40c573904d9d4bb456ea383bca413c0394b60bb4ac049a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:04:39 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/693527e3052a34160e40c573904d9d4bb456ea383bca413c0394b60bb4ac049a/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:04:39 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/693527e3052a34160e40c573904d9d4bb456ea383bca413c0394b60bb4ac049a/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.mssvzx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:04:39 compute-1 sudo[226993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-augqejfdizkknuggravgufgypnjemmrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090679.0734265-3796-101255083815175/AnsiballZ_file.py'
Oct 10 10:04:39 compute-1 podman[226915]: 2025-10-10 10:04:39.299126817 +0000 UTC m=+0.032960133 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:04:39 compute-1 sudo[226993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:39 compute-1 podman[226946]: 2025-10-10 10:04:39.413006154 +0000 UTC m=+0.076916006 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct 10 10:04:39 compute-1 podman[226915]: 2025-10-10 10:04:39.424489849 +0000 UTC m=+0.158323115 container init d9857f148c0b09e0041b575e39e53db43b365e45c1430ca88b3c2539bad267b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct 10 10:04:39 compute-1 podman[226915]: 2025-10-10 10:04:39.432109957 +0000 UTC m=+0.165943223 container start d9857f148c0b09e0041b575e39e53db43b365e45c1430ca88b3c2539bad267b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct 10 10:04:39 compute-1 bash[226915]: d9857f148c0b09e0041b575e39e53db43b365e45c1430ca88b3c2539bad267b9
Oct 10 10:04:39 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 10:04:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:39 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 10 10:04:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:39 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 10 10:04:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:39 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 10 10:04:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:39 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 10 10:04:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:39 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 10 10:04:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:39 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 10 10:04:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:39 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 10 10:04:39 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:39 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:04:39 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:39.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:04:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:39 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:04:39 compute-1 python3.9[226999]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:04:39 compute-1 sudo[226993]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:39 compute-1 podman[227039]: 2025-10-10 10:04:39.740464538 +0000 UTC m=+0.082695354 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=multipathd)
Oct 10 10:04:39 compute-1 ceph-mon[79167]: pgmap v553: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:04:39 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:04:39 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:04:40 compute-1 sudo[227212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wirkpihqcnfaslnssgizybwzxyxokkzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090679.7963233-3796-168083011147629/AnsiballZ_file.py'
Oct 10 10:04:40 compute-1 sudo[227212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:40 compute-1 unix_chkpwd[227215]: password check failed for user (root)
Oct 10 10:04:40 compute-1 sshd-session[227040]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=78.128.112.74  user=root
Oct 10 10:04:40 compute-1 python3.9[227214]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:04:40 compute-1 sudo[227212]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:40 compute-1 unix_chkpwd[227220]: password check failed for user (root)
Oct 10 10:04:40 compute-1 sudo[227366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-leeakkqikcapamvoivowwupjeklbffru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090680.5889416-3862-236513407745258/AnsiballZ_file.py'
Oct 10 10:04:40 compute-1 sudo[227366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:41 compute-1 python3.9[227368]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:04:41 compute-1 sudo[227366]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:41 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:41 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:04:41 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:41.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:04:41 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:41 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:41 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:41.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:41 compute-1 sudo[227519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgchvzupbueuvbjbfzvysldllgpqrept ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090681.3818994-3862-228805055711269/AnsiballZ_file.py'
Oct 10 10:04:41 compute-1 sudo[227519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:41 compute-1 ceph-mon[79167]: pgmap v554: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Oct 10 10:04:41 compute-1 python3.9[227521]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:04:41 compute-1 sudo[227519]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:04:42.197 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:04:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:04:42.198 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:04:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:04:42.198 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:04:42 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:04:42 compute-1 sshd-session[227040]: Failed password for root from 78.128.112.74 port 54524 ssh2
Oct 10 10:04:42 compute-1 sudo[227671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgptclkjnafjearsofmxoxymcahozudc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090682.120048-3862-205053697389690/AnsiballZ_file.py'
Oct 10 10:04:42 compute-1 sudo[227671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:42 compute-1 sshd-session[225613]: Failed password for root from 80.94.93.233 port 60050 ssh2
Oct 10 10:04:42 compute-1 sshd-session[227040]: Connection closed by authenticating user root 78.128.112.74 port 54524 [preauth]
Oct 10 10:04:42 compute-1 sshd-session[225613]: Received disconnect from 80.94.93.233 port 60050:11:  [preauth]
Oct 10 10:04:42 compute-1 sshd-session[225613]: Disconnected from authenticating user root 80.94.93.233 port 60050 [preauth]
Oct 10 10:04:42 compute-1 sshd-session[225613]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.233  user=root
Oct 10 10:04:42 compute-1 python3.9[227673]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:04:42 compute-1 sudo[227671]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:43 compute-1 sudo[227825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcsridbldxcjuaniexowevqarjjddroz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090682.8946466-3862-59841094263782/AnsiballZ_file.py'
Oct 10 10:04:43 compute-1 sudo[227825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:43 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:43 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:04:43 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:43.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:04:43 compute-1 podman[227827]: 2025-10-10 10:04:43.431804499 +0000 UTC m=+0.179669919 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 10 10:04:43 compute-1 python3.9[227828]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:04:43 compute-1 sudo[227825]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:43 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:43 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:43 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:43.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:43 compute-1 unix_chkpwd[227876]: password check failed for user (root)
Oct 10 10:04:43 compute-1 sshd-session[227697]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.233  user=root
Oct 10 10:04:43 compute-1 ceph-mon[79167]: pgmap v555: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 170 B/s wr, 1 op/s
Oct 10 10:04:44 compute-1 sudo[228005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnbrytvteaxvamafxajqsmrnydnxxlld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090683.669812-3862-43519693411519/AnsiballZ_file.py'
Oct 10 10:04:44 compute-1 sudo[228005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:44 compute-1 python3.9[228007]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:04:44 compute-1 sudo[228005]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:44 compute-1 sudo[228157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avoxamtzmaiczhprnzjcfsxxvnpkxccq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090684.4090335-3862-21561385218200/AnsiballZ_file.py'
Oct 10 10:04:44 compute-1 sudo[228157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:44 compute-1 python3.9[228159]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:04:44 compute-1 sudo[228157]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:45 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:45 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:45 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:45.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:45 compute-1 sudo[228310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxlfvcjdmfymaapabgscdiaxymzqxxfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090685.104159-3862-27653265677184/AnsiballZ_file.py'
Oct 10 10:04:45 compute-1 sudo[228310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:45 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:45 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:04:45 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:45.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:04:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:45 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:04:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:45 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:04:45 compute-1 python3.9[228312]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:04:45 compute-1 sudo[228310]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:45 compute-1 sshd-session[227697]: Failed password for root from 80.94.93.233 port 42804 ssh2
Oct 10 10:04:45 compute-1 ceph-mon[79167]: pgmap v556: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Oct 10 10:04:46 compute-1 sudo[228462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipaewyvobesyndyeplrysgdhpdhwojsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090685.8249574-3862-219301407982878/AnsiballZ_file.py'
Oct 10 10:04:46 compute-1 sudo[228462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:46 compute-1 ceph-osd[76867]: bluestore.MempoolThread fragmentation_score=0.000030 took=0.000038s
Oct 10 10:04:46 compute-1 python3.9[228464]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:04:46 compute-1 sudo[228462]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:46 compute-1 sudo[228626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-recrwsxubcvrdblubjtlhmbyszlnmhri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090686.512765-3862-200363741625596/AnsiballZ_file.py'
Oct 10 10:04:46 compute-1 sudo[228626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:46 compute-1 podman[228588]: 2025-10-10 10:04:46.90835302 +0000 UTC m=+0.095844685 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 10:04:46 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:04:47 compute-1 python3.9[228632]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:04:47 compute-1 sudo[228626]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:47 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:47 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:04:47 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:47.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:04:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:04:47 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:47 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:47 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:47.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:47 compute-1 ceph-mon[79167]: pgmap v557: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Oct 10 10:04:48 compute-1 unix_chkpwd[228660]: password check failed for user (root)
Oct 10 10:04:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:48 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:04:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:48 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:04:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:48 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:04:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:48 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:04:49 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:49 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:49 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:49.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:49 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:49 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:49 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:49.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:49 compute-1 ceph-mon[79167]: pgmap v558: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:04:50 compute-1 sshd-session[227697]: Failed password for root from 80.94.93.233 port 42804 ssh2
Oct 10 10:04:51 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:51 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:51 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:51.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:51 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:51 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:51 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:51.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 10:04:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 10 10:04:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 10 10:04:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 10 10:04:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 10 10:04:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 10 10:04:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 10 10:04:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:04:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:04:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:04:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 10 10:04:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:04:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 10 10:04:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 10 10:04:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 10 10:04:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 10 10:04:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 10 10:04:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 10 10:04:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 10 10:04:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 10 10:04:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 10 10:04:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 10 10:04:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 10 10:04:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 10 10:04:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 10:04:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 10 10:04:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 10:04:51 compute-1 ceph-mon[79167]: pgmap v559: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:04:52 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:52 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:04:52 compute-1 unix_chkpwd[228730]: password check failed for user (root)
Oct 10 10:04:52 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:52 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:52 compute-1 sudo[228804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebjfqezunujtczgobaicnxiwxfqwvcwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090692.453998-4229-165420178970749/AnsiballZ_getent.py'
Oct 10 10:04:52 compute-1 sudo[228804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:53 compute-1 python3.9[228806]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Oct 10 10:04:53 compute-1 sudo[228804]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:53 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:53 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:04:53 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:53.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:04:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:53 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:53 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:53 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:53 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:53.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:53 compute-1 sudo[228958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aianisumjhdwznjbzcggkcsfbwiswjjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090693.3919458-4253-11593441364895/AnsiballZ_group.py'
Oct 10 10:04:53 compute-1 sudo[228958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:53 compute-1 ceph-mon[79167]: pgmap v560: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Oct 10 10:04:54 compute-1 python3.9[228960]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 10 10:04:54 compute-1 groupadd[228961]: group added to /etc/group: name=nova, GID=42436
Oct 10 10:04:54 compute-1 groupadd[228961]: group added to /etc/gshadow: name=nova
Oct 10 10:04:54 compute-1 groupadd[228961]: new group: name=nova, GID=42436
Oct 10 10:04:54 compute-1 sudo[228958]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:54 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:54 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:54 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:54 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ad4000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:54 compute-1 sshd-session[227697]: Failed password for root from 80.94.93.233 port 42804 ssh2
Oct 10 10:04:55 compute-1 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 10:04:55 compute-1 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 9078 writes, 35K keys, 9078 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 9078 writes, 2064 syncs, 4.40 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 776 writes, 1221 keys, 776 commit groups, 1.0 writes per commit group, ingest: 0.40 MB, 0.00 MB/s
                                           Interval WAL: 776 writes, 366 syncs, 2.12 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac69b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac69b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac69b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 10 10:04:55 compute-1 sudo[229116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdtvgdnqnfktpdemdmztljcvcfewywcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090694.4400172-4277-241860209195821/AnsiballZ_user.py'
Oct 10 10:04:55 compute-1 sudo[229116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:04:55 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:55 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:04:55 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:55.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:04:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100455 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 1ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 10:04:55 compute-1 python3.9[229118]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-1 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 10 10:04:55 compute-1 useradd[229121]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Oct 10 10:04:55 compute-1 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 10:04:55 compute-1 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 10:04:55 compute-1 useradd[229121]: add 'nova' to group 'libvirt'
Oct 10 10:04:55 compute-1 useradd[229121]: add 'nova' to shadow group 'libvirt'
Oct 10 10:04:55 compute-1 sudo[229123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:04:55 compute-1 sudo[229123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:04:55 compute-1 sudo[229123]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100455 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 10:04:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:55 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:55 compute-1 sudo[229116]: pam_unix(sudo:session): session closed for user root
Oct 10 10:04:55 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:55 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:04:55 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:55.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:04:56 compute-1 ceph-mon[79167]: pgmap v561: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:04:56 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:56 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:56 compute-1 sshd-session[229178]: Accepted publickey for zuul from 192.168.122.30 port 35038 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 10:04:56 compute-1 systemd-logind[789]: New session 56 of user zuul.
Oct 10 10:04:56 compute-1 systemd[1]: Started Session 56 of User zuul.
Oct 10 10:04:56 compute-1 sshd-session[229178]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 10:04:56 compute-1 sshd-session[229181]: Received disconnect from 192.168.122.30 port 35038:11: disconnected by user
Oct 10 10:04:56 compute-1 sshd-session[229181]: Disconnected from user zuul 192.168.122.30 port 35038
Oct 10 10:04:56 compute-1 sshd-session[229178]: pam_unix(sshd:session): session closed for user zuul
Oct 10 10:04:56 compute-1 systemd[1]: session-56.scope: Deactivated successfully.
Oct 10 10:04:56 compute-1 systemd-logind[789]: Session 56 logged out. Waiting for processes to exit.
Oct 10 10:04:56 compute-1 systemd-logind[789]: Removed session 56.
Oct 10 10:04:56 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:56 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:57 compute-1 sshd-session[227697]: Received disconnect from 80.94.93.233 port 42804:11:  [preauth]
Oct 10 10:04:57 compute-1 sshd-session[227697]: Disconnected from authenticating user root 80.94.93.233 port 42804 [preauth]
Oct 10 10:04:57 compute-1 sshd-session[227697]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.233  user=root
Oct 10 10:04:57 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:57 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:57 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:57.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:57 compute-1 python3.9[229331]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:04:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:04:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:57 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ad4001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:57 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:57 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:04:57 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:57.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:04:57 compute-1 unix_chkpwd[229456]: password check failed for user (root)
Oct 10 10:04:57 compute-1 sshd-session[229332]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.233  user=root
Oct 10 10:04:57 compute-1 python3.9[229455]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760090696.8126714-4353-257499300507692/.source.json follow=False _original_basename=config.json.j2 checksum=2c2474b5f24ef7c9ed37f49680082593e0d1100b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:04:58 compute-1 ceph-mon[79167]: pgmap v562: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:04:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:58 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:58 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:58 compute-1 python3.9[229606]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:04:59 compute-1 python3.9[229682]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:04:59 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:59 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:04:59 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:04:59.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:04:59 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:04:59 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:04:59 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:04:59 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:04:59 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:04:59.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:04:59 compute-1 sshd-session[229332]: Failed password for root from 80.94.93.233 port 54588 ssh2
Oct 10 10:04:59 compute-1 python3.9[229833]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:05:00 compute-1 ceph-mon[79167]: pgmap v563: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:05:00 compute-1 unix_chkpwd[229881]: password check failed for user (root)
Oct 10 10:05:00 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:00 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:00 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:00 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ad4001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:00 compute-1 python3.9[229955]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760090699.3529606-4353-77381754995880/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:05:01 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:01 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:01 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:01.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:01 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:01 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:01 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:01 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:01 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:01.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:01 compute-1 python3.9[230106]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:05:02 compute-1 ceph-mon[79167]: pgmap v564: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Oct 10 10:05:02 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:05:02 compute-1 python3.9[230227]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760090700.9652262-4353-247935045104795/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=bc7f3bb7d4094c596a18178a888511b54e157ba4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:05:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:05:02 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:02 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af00091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:02 compute-1 sshd-session[229332]: Failed password for root from 80.94.93.233 port 54588 ssh2
Oct 10 10:05:02 compute-1 unix_chkpwd[230304]: password check failed for user (root)
Oct 10 10:05:02 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:02 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:02 compute-1 python3.9[230378]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:05:03 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:03 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:03 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:03.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:03 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ad4001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:03 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:03 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:03 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:03.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:03 compute-1 python3.9[230500]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760090702.3959923-4353-239494197958387/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:05:03 compute-1 sshd-session[229332]: Failed password for root from 80.94.93.233 port 54588 ssh2
Oct 10 10:05:04 compute-1 ceph-mon[79167]: pgmap v565: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 170 B/s wr, 1 op/s
Oct 10 10:05:04 compute-1 sudo[230650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gftlzuexkzxxcjzvkrqwwbdlhzldhgvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090703.9147172-4559-253912644241126/AnsiballZ_file.py'
Oct 10 10:05:04 compute-1 sudo[230650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:05:04 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:04 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:04 compute-1 python3.9[230652]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:05:04 compute-1 sudo[230650]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:04 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:04 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af00091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:04 compute-1 sshd-session[229332]: Received disconnect from 80.94.93.233 port 54588:11:  [preauth]
Oct 10 10:05:04 compute-1 sshd-session[229332]: Disconnected from authenticating user root 80.94.93.233 port 54588 [preauth]
Oct 10 10:05:04 compute-1 sshd-session[229332]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.233  user=root
Oct 10 10:05:05 compute-1 sudo[230802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtzodpjaxdwmiavbjkfppkquqidjyhgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090704.715677-4583-107831335087623/AnsiballZ_copy.py'
Oct 10 10:05:05 compute-1 sudo[230802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:05:05 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:05 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:05 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:05.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:05 compute-1 python3.9[230804]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:05:05 compute-1 sudo[230802]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:05 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:05 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:05 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:05 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:05 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:05.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:05 compute-1 sudo[230955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsdflwjryuvjmeodxvvadpuvoryvxkyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090705.5954072-4607-139664363254156/AnsiballZ_stat.py'
Oct 10 10:05:05 compute-1 sudo[230955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:05:06 compute-1 ceph-mon[79167]: pgmap v566: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:06 compute-1 python3.9[230957]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 10:05:06 compute-1 sudo[230955]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:06 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:06 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ad4002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:06 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:06 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:06 compute-1 sudo[231107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzdelfvxfuvwlitsbdmwpzsvulugqncv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090706.3999648-4631-207074107049293/AnsiballZ_stat.py'
Oct 10 10:05:06 compute-1 sudo[231107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:05:06 compute-1 python3.9[231109]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:05:06 compute-1 sudo[231107]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:07 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:07 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:07 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:07.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:05:07 compute-1 sudo[231231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqaqsvwcjmumniheeywqkgyxwtyhdgrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090706.3999648-4631-207074107049293/AnsiballZ_copy.py'
Oct 10 10:05:07 compute-1 sudo[231231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:05:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:07 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af0009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:07 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:07 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:07 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:07.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:07 compute-1 python3.9[231233]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1760090706.3999648-4631-207074107049293/.source _original_basename=.2_y5ig3x follow=False checksum=30e51ddc5f6ccaebeef930e549a0be2e1fe3dd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Oct 10 10:05:07 compute-1 sudo[231231]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:08 compute-1 ceph-mon[79167]: pgmap v567: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:08 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:08 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ad4002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:08 compute-1 python3.9[231385]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 10:05:09 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:09 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:09 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:09.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:09 compute-1 python3.9[231537]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:05:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:09 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:09 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:09 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:05:09 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:09.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:05:09 compute-1 podman[231634]: 2025-10-10 10:05:09.916434297 +0000 UTC m=+0.073770690 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 10 10:05:09 compute-1 podman[231633]: 2025-10-10 10:05:09.954629933 +0000 UTC m=+0.104770080 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 10 10:05:10 compute-1 python3.9[231684]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760090708.9406898-4709-124333478308629/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=837ffd9c004e5987a2e117698c56827ebbfeb5b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:05:10 compute-1 ceph-mon[79167]: pgmap v568: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:10 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:10 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af0009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:10 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:10 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:10 compute-1 python3.9[231847]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 10 10:05:11 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:11 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:05:11 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:11.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:05:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:11 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ad4002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:11 compute-1 python3.9[231968]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760090710.3358572-4754-149086951529121/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=722ab36345f3375cbdcf911ce8f6e1a8083d7e59 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 10 10:05:11 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:11 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:11 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:11.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:12 compute-1 ceph-mon[79167]: pgmap v569: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:12 compute-1 sudo[232119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfzchrjufaupfclesrqwdqpiccgzyiyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090712.0068562-4805-198203351552528/AnsiballZ_container_config_data.py'
Oct 10 10:05:12 compute-1 sudo[232119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:05:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:05:12 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:12 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:12 compute-1 python3.9[232121]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Oct 10 10:05:12 compute-1 sudo[232119]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:12 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:12 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af0009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:13 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:13 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:13 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:13.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:13 compute-1 sudo[232271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-diiwlxjyhewkcbjbsutlsvzaiosapuoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090712.930629-4832-181549242030286/AnsiballZ_container_config_hash.py'
Oct 10 10:05:13 compute-1 sudo[232271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:05:13 compute-1 python3.9[232273]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 10 10:05:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:13 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:13 compute-1 sudo[232271]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:13 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:13 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:13 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:13.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:14 compute-1 podman[232304]: 2025-10-10 10:05:14.016148208 +0000 UTC m=+0.122444643 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:05:14 compute-1 ceph-mon[79167]: pgmap v570: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 10:05:14 compute-1 sudo[232450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbibaghxmjcgrwrtbxtxjqtnesaioyxd ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760090713.9145687-4862-172057522935022/AnsiballZ_edpm_container_manage.py'
Oct 10 10:05:14 compute-1 sudo[232450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:05:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:14 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ad4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:14 compute-1 python3[232452]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Oct 10 10:05:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:14 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:15 compute-1 ceph-mon[79167]: pgmap v571: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:15 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:15 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:15 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:15.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:15 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af0009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:15 compute-1 sudo[232489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:05:15 compute-1 sudo[232489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:05:15 compute-1 sudo[232489]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:15 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:15 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:05:15 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:15.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:05:16 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:16 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:05:16 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:16 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ad4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:17 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:17 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:17 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:17.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:05:17 compute-1 ceph-mon[79167]: pgmap v572: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:17 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:17 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:17 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:17 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:05:17 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:17.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:05:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:18 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af0009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:18 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:19 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:19 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:05:19 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:19.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:05:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:19 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ad4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:19 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:19 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:05:19 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:19.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:05:20 compute-1 ceph-mon[79167]: pgmap v573: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:20 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:20 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:20 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:20 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af0009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:20 compute-1 podman[232533]: 2025-10-10 10:05:20.951350489 +0000 UTC m=+3.055921527 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 10:05:21 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:21 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:05:21 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:21.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:05:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:21 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:21 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:21 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:05:21 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:21.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:05:21 compute-1 ceph-mon[79167]: pgmap v574: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:22 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 10:05:22 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 3867 writes, 21K keys, 3867 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.05 MB/s
                                           Cumulative WAL: 3867 writes, 3867 syncs, 1.00 writes per sync, written: 0.05 GB, 0.05 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1408 writes, 6822 keys, 1408 commit groups, 1.0 writes per commit group, ingest: 16.38 MB, 0.03 MB/s
                                           Interval WAL: 1408 writes, 1408 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    146.9      0.22              0.11        10    0.022       0      0       0.0       0.0
                                             L6      1/0   12.27 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.5    202.8    172.1      0.65              0.37         9    0.072     43K   4821       0.0       0.0
                                            Sum      1/0   12.27 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.5    151.7    165.8      0.87              0.48        19    0.046     43K   4821       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   5.4    161.2    161.6      0.39              0.23         8    0.049     22K   2562       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    202.8    172.1      0.65              0.37         9    0.072     43K   4821       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    148.6      0.22              0.11         9    0.024       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.031, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.14 GB write, 0.12 MB/s write, 0.13 GB read, 0.11 MB/s read, 0.9 seconds
                                           Interval compaction: 0.06 GB write, 0.11 MB/s write, 0.06 GB read, 0.11 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5625d3e63350#2 capacity: 304.00 MB usage: 8.77 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 7.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(481,8.41 MB,2.76645%) FilterBlock(19,130.05 KB,0.041776%) IndexBlock(19,240.70 KB,0.0773229%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 10 10:05:22 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:05:22 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:22 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ad4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:22 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:22 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:23 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:23 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:23 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:23.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:23 compute-1 ceph-mon[79167]: pgmap v575: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 10:05:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:23 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af0009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:23 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:23 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:23 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:23.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:24 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:24 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:24 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:24 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ac8000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:25 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:25 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:25 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:25.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:25 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:25 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:25 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:25 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:25 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:25.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:26 compute-1 ceph-mon[79167]: pgmap v576: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:26 compute-1 podman[232465]: 2025-10-10 10:05:26.203358372 +0000 UTC m=+11.634791103 image pull 7ac362f4e836cf46e10a309acb4abf774df9481a1d6404c213437495cfb42f5d quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844
Oct 10 10:05:26 compute-1 podman[232600]: 2025-10-10 10:05:26.348822254 +0000 UTC m=+0.048193730 container create aa3b7ea7563e4765bb54fd4ce4b9fe769a740174fd50f746c6e754671da4d69c (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=nova_compute_init, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct 10 10:05:26 compute-1 podman[232600]: 2025-10-10 10:05:26.322432211 +0000 UTC m=+0.021803677 image pull 7ac362f4e836cf46e10a309acb4abf774df9481a1d6404c213437495cfb42f5d quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844
Oct 10 10:05:26 compute-1 python3[232452]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844 bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Oct 10 10:05:26 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:26 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af0009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:26 compute-1 sudo[232450]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:26 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:26 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:27 compute-1 sudo[232788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drczmbsxmprwxfhslapebbogrjjnwaaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090726.781888-4886-54117713115670/AnsiballZ_stat.py'
Oct 10 10:05:27 compute-1 sudo[232788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:05:27 compute-1 ceph-mon[79167]: pgmap v577: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:27 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:27 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:27 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:27.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:27 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:05:27 compute-1 python3.9[232790]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 10:05:27 compute-1 sudo[232788]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:27 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:27 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ac80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:27 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:27 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:27 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:27.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:28 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:28 compute-1 sudo[232943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkmbqijhmqazuoexbpnelmwebagdqwjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090728.1266458-4922-6366801494103/AnsiballZ_container_config_data.py'
Oct 10 10:05:28 compute-1 sudo[232943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:05:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:28 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af0009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:28 compute-1 python3.9[232945]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Oct 10 10:05:28 compute-1 sudo[232943]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:29 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:29 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:29 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:29.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:29 compute-1 sudo[233096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdwiieoqmqfqlvsayvdnbsrqnuyyyiyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090729.0296717-4949-48089944444015/AnsiballZ_container_config_hash.py'
Oct 10 10:05:29 compute-1 sudo[233096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:05:29 compute-1 ceph-mon[79167]: pgmap v578: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:29 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:29 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:29 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:05:29 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:29.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:05:29 compute-1 python3.9[233098]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 10 10:05:29 compute-1 sudo[233096]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:30 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:30 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ac80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:30 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:30 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:30 compute-1 sudo[233248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsbvypxgrbpacqykgjgptbahttzucwvk ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760090730.2170613-4979-35748563581547/AnsiballZ_edpm_container_manage.py'
Oct 10 10:05:30 compute-1 sudo[233248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:05:30 compute-1 python3[233250]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Oct 10 10:05:31 compute-1 podman[233286]: 2025-10-10 10:05:31.174777315 +0000 UTC m=+0.061010682 container create 6594a2242ea0b6d21d9e3dcfd9bb04a710348cb3679522fa85600b84a6df5a8e (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, container_name=nova_compute, tcib_managed=true)
Oct 10 10:05:31 compute-1 podman[233286]: 2025-10-10 10:05:31.143671613 +0000 UTC m=+0.029904980 image pull 7ac362f4e836cf46e10a309acb4abf774df9481a1d6404c213437495cfb42f5d quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844
Oct 10 10:05:31 compute-1 python3[233250]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume 
/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844 kolla_start
Oct 10 10:05:31 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:31 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.002000054s ======
Oct 10 10:05:31 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:31.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Oct 10 10:05:31 compute-1 sudo[233248]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:31 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af0009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:31 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:31 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:31 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:31.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:31 compute-1 ceph-mon[79167]: pgmap v579: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:31 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:05:31 compute-1 sudo[233476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxnxmxqzcwyevwkluxunfcatyulgodmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090731.6076076-5003-62985049844412/AnsiballZ_stat.py'
Oct 10 10:05:31 compute-1 sudo[233476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:05:32 compute-1 python3.9[233478]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 10:05:32 compute-1 sudo[233476]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:32 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:05:32 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:32 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:32 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:32 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ac80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:32 compute-1 sudo[233630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srxpjcdfmelvzoolsljrmgvdkqngmvsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090732.5545328-5030-85912389476342/AnsiballZ_file.py'
Oct 10 10:05:32 compute-1 sudo[233630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:05:33 compute-1 python3.9[233632]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:05:33 compute-1 sudo[233630]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:33 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:33 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:05:33 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:33.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:05:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:33 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:33 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:33 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:05:33 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:33.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:05:33 compute-1 ceph-mon[79167]: pgmap v580: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 10:05:33 compute-1 sudo[233782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpgumtelcggkhkrcekglafcrtmhxumvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090733.2445624-5030-68184296990557/AnsiballZ_copy.py'
Oct 10 10:05:33 compute-1 sudo[233782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:05:34 compute-1 python3.9[233784]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760090733.2445624-5030-68184296990557/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 10 10:05:34 compute-1 sudo[233782]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:34 compute-1 sudo[233858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgzewfxwozusoxockmozneenulzvvcnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090733.2445624-5030-68184296990557/AnsiballZ_systemd.py'
Oct 10 10:05:34 compute-1 sudo[233858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:05:34 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:34 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af0009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:34 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:34 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:34 compute-1 python3.9[233860]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 10 10:05:34 compute-1 systemd[1]: Reloading.
Oct 10 10:05:34 compute-1 systemd-rc-local-generator[233887]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:05:34 compute-1 systemd-sysv-generator[233890]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:05:35 compute-1 sudo[233858]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:35 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:35 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:05:35 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:35.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:05:35 compute-1 sudo[233969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbjnhdziuzwazcfcorrptzgdjzyhphge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090733.2445624-5030-68184296990557/AnsiballZ_systemd.py'
Oct 10 10:05:35 compute-1 sudo[233969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:05:35 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:35 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ac8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:35 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:35 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:05:35 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:35.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:05:35 compute-1 python3.9[233971]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 10 10:05:35 compute-1 sudo[233972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:05:35 compute-1 sudo[233972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:05:35 compute-1 sudo[233972]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:35 compute-1 systemd[1]: Reloading.
Oct 10 10:05:35 compute-1 ceph-mon[79167]: pgmap v581: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:35 compute-1 systemd-rc-local-generator[234025]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 10 10:05:35 compute-1 systemd-sysv-generator[234028]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 10 10:05:36 compute-1 systemd[1]: Starting nova_compute container...
Oct 10 10:05:36 compute-1 systemd[1]: Started libcrun container.
Oct 10 10:05:36 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b981e6fe522e02b6e8cb5b05f3134ce9c4a909c2753bd0b3e72e520ad3a3ed45/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 10 10:05:36 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b981e6fe522e02b6e8cb5b05f3134ce9c4a909c2753bd0b3e72e520ad3a3ed45/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 10 10:05:36 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b981e6fe522e02b6e8cb5b05f3134ce9c4a909c2753bd0b3e72e520ad3a3ed45/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 10 10:05:36 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b981e6fe522e02b6e8cb5b05f3134ce9c4a909c2753bd0b3e72e520ad3a3ed45/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 10 10:05:36 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b981e6fe522e02b6e8cb5b05f3134ce9c4a909c2753bd0b3e72e520ad3a3ed45/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 10 10:05:36 compute-1 podman[234036]: 2025-10-10 10:05:36.208443451 +0000 UTC m=+0.119580474 container init 6594a2242ea0b6d21d9e3dcfd9bb04a710348cb3679522fa85600b84a6df5a8e (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 10 10:05:36 compute-1 podman[234036]: 2025-10-10 10:05:36.220715437 +0000 UTC m=+0.131852430 container start 6594a2242ea0b6d21d9e3dcfd9bb04a710348cb3679522fa85600b84a6df5a8e (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 10 10:05:36 compute-1 podman[234036]: nova_compute
Oct 10 10:05:36 compute-1 nova_compute[234052]: + sudo -E kolla_set_configs
Oct 10 10:05:36 compute-1 systemd[1]: Started nova_compute container.
Oct 10 10:05:36 compute-1 sudo[233969]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:36 compute-1 nova_compute[234052]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 10 10:05:36 compute-1 nova_compute[234052]: INFO:__main__:Validating config file
Oct 10 10:05:36 compute-1 nova_compute[234052]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 10 10:05:36 compute-1 nova_compute[234052]: INFO:__main__:Copying service configuration files
Oct 10 10:05:36 compute-1 nova_compute[234052]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 10 10:05:36 compute-1 nova_compute[234052]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 10 10:05:36 compute-1 nova_compute[234052]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 10 10:05:36 compute-1 nova_compute[234052]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 10 10:05:36 compute-1 nova_compute[234052]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 10 10:05:36 compute-1 nova_compute[234052]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 10 10:05:36 compute-1 nova_compute[234052]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 10 10:05:36 compute-1 nova_compute[234052]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 10 10:05:36 compute-1 nova_compute[234052]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 10 10:05:36 compute-1 nova_compute[234052]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 10 10:05:36 compute-1 nova_compute[234052]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 10 10:05:36 compute-1 nova_compute[234052]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 10 10:05:36 compute-1 nova_compute[234052]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 10 10:05:36 compute-1 nova_compute[234052]: INFO:__main__:Deleting /etc/ceph
Oct 10 10:05:36 compute-1 nova_compute[234052]: INFO:__main__:Creating directory /etc/ceph
Oct 10 10:05:36 compute-1 nova_compute[234052]: INFO:__main__:Setting permission for /etc/ceph
Oct 10 10:05:36 compute-1 nova_compute[234052]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct 10 10:05:36 compute-1 nova_compute[234052]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 10 10:05:36 compute-1 nova_compute[234052]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct 10 10:05:36 compute-1 nova_compute[234052]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 10 10:05:36 compute-1 nova_compute[234052]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 10 10:05:36 compute-1 nova_compute[234052]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 10 10:05:36 compute-1 nova_compute[234052]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 10 10:05:36 compute-1 nova_compute[234052]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 10 10:05:36 compute-1 nova_compute[234052]: INFO:__main__:Writing out command to execute
Oct 10 10:05:36 compute-1 nova_compute[234052]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 10 10:05:36 compute-1 nova_compute[234052]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 10 10:05:36 compute-1 nova_compute[234052]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 10 10:05:36 compute-1 nova_compute[234052]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 10 10:05:36 compute-1 nova_compute[234052]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 10 10:05:36 compute-1 nova_compute[234052]: ++ cat /run_command
Oct 10 10:05:36 compute-1 nova_compute[234052]: + CMD=nova-compute
Oct 10 10:05:36 compute-1 nova_compute[234052]: + ARGS=
Oct 10 10:05:36 compute-1 nova_compute[234052]: + sudo kolla_copy_cacerts
Oct 10 10:05:36 compute-1 nova_compute[234052]: + [[ ! -n '' ]]
Oct 10 10:05:36 compute-1 nova_compute[234052]: + . kolla_extend_start
Oct 10 10:05:36 compute-1 nova_compute[234052]: + echo 'Running command: '\''nova-compute'\'''
Oct 10 10:05:36 compute-1 nova_compute[234052]: Running command: 'nova-compute'
Oct 10 10:05:36 compute-1 nova_compute[234052]: + umask 0022
Oct 10 10:05:36 compute-1 nova_compute[234052]: + exec nova-compute
Oct 10 10:05:36 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:36 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:36 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:36 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af0009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:37 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:37 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:37 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:37.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:05:37 compute-1 python3.9[234213]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 10:05:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:37 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:37 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:37 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:05:37 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:37.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:05:37 compute-1 ceph-mon[79167]: pgmap v582: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:38 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ac8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:38 compute-1 python3.9[234365]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 10:05:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:38 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:38 compute-1 nova_compute[234052]: 2025-10-10 10:05:38.629 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 10 10:05:38 compute-1 nova_compute[234052]: 2025-10-10 10:05:38.629 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 10 10:05:38 compute-1 nova_compute[234052]: 2025-10-10 10:05:38.629 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 10 10:05:38 compute-1 nova_compute[234052]: 2025-10-10 10:05:38.630 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Oct 10 10:05:38 compute-1 nova_compute[234052]: 2025-10-10 10:05:38.774 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:05:38 compute-1 nova_compute[234052]: 2025-10-10 10:05:38.807 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:05:39 compute-1 sudo[234493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.292 2 INFO nova.virt.driver [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Oct 10 10:05:39 compute-1 sudo[234493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:05:39 compute-1 sudo[234493]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:39 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:39 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:39 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:39.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:39 compute-1 sudo[234545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:05:39 compute-1 sudo[234545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.467 2 INFO nova.compute.provider_config [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.479 2 DEBUG oslo_concurrency.lockutils [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.480 2 DEBUG oslo_concurrency.lockutils [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.480 2 DEBUG oslo_concurrency.lockutils [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.480 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.480 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.481 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.481 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.481 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.481 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.481 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.481 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.482 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.482 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.482 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.482 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.482 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.482 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.482 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.482 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.483 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.483 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.483 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.483 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] console_host                   = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.483 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.483 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.483 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.484 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.484 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.484 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.484 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.484 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.484 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.484 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.485 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.485 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.485 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.485 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.485 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.485 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.485 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.486 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.486 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] host                           = compute-1.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.486 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.486 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.486 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.486 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.486 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.487 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.487 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.487 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.487 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.487 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.487 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.487 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.488 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.488 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.488 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.488 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.488 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.488 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.488 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.489 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.489 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.489 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.489 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.489 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.489 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.489 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.490 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.490 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.490 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.490 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.490 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.490 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.490 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.490 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.491 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.491 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.491 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.491 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.491 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.491 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.491 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.492 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] my_block_storage_ip            = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.492 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] my_ip                          = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.492 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.492 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.492 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.492 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.492 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.493 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.493 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.493 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.493 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.493 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.493 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.493 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.493 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.494 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.494 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.494 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.494 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.494 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.494 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.494 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.495 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.495 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.495 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.495 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.495 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.495 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.496 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.496 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.496 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.496 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.496 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.496 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.496 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.496 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.497 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.497 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.497 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.497 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.497 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.497 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.497 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.498 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.498 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.498 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.498 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.498 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.498 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.498 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.499 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.499 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.499 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.499 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.499 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.499 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.499 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.500 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.500 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.500 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.500 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.500 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.500 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.500 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.500 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.501 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.501 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.501 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.501 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.501 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.501 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.502 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.502 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.502 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.502 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.502 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.502 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.502 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.503 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.503 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.503 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.503 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.503 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.503 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.503 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.504 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.504 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.504 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.504 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.504 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.504 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.504 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.505 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.505 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.505 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.505 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.505 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.505 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.505 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.506 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.506 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.506 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.506 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.506 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.506 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.506 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.507 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.507 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.507 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.507 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.507 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.507 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.507 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.508 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.508 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.508 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.508 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.508 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.508 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.508 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.509 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.509 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.509 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.509 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.509 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.509 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.509 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.510 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.510 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.510 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.510 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.510 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.510 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.510 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.511 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.511 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.511 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.511 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.511 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.511 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.512 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.512 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.512 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.512 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.512 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.512 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.512 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.512 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.513 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.513 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.513 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.513 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.513 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.513 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.513 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.514 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.514 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.514 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.514 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.514 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.514 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.514 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.515 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.515 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.515 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.515 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.515 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.515 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.516 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.516 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.516 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.516 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.516 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.516 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.516 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.517 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.517 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.517 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.517 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.517 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.517 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.517 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.518 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.518 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.518 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.518 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.518 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.518 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.518 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.519 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.519 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.519 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.519 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.519 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.519 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.519 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.520 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.520 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.520 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.520 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.520 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.520 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.520 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.521 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.521 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.521 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.521 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.521 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.521 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.521 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.522 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.522 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.522 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.522 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.522 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.522 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.522 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.523 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.523 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.523 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.523 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.523 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.523 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.523 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.524 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.524 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.524 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.524 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.524 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.524 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.524 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.525 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.525 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.525 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.525 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.525 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.525 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.525 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.525 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.526 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.526 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.526 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.526 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.526 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.526 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.526 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.527 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.527 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.527 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.527 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.527 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.527 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.527 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.528 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.528 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.528 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.528 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.528 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.528 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.528 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.529 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.529 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.529 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.529 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.529 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.529 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.529 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.530 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.530 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.530 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.530 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.530 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.530 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.530 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.531 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.531 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.531 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.531 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.531 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.532 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.532 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.532 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.532 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.532 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:39 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af000a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.532 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.532 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.533 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.533 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.533 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.533 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.533 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.533 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.533 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.534 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.534 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.534 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.534 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.534 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.534 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.534 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.535 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.535 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.535 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.535 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.535 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.535 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.535 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.536 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.536 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.536 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.536 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.536 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.536 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.536 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.537 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.537 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.537 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.537 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.537 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.537 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.537 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.538 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.538 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.538 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.538 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.538 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.538 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.538 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.539 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.539 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.539 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.539 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.539 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.539 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.539 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.540 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.540 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.540 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.540 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.540 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.540 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.540 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.541 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.541 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.541 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.541 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.541 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.541 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.541 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.542 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.542 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.542 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.542 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.542 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.542 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.542 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.543 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.543 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.543 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.543 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.543 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.543 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.543 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.544 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.544 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.544 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.544 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.544 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.544 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.544 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.545 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.545 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.545 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.545 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.545 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.545 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 python3.9[234543]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.545 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.546 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.546 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.546 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.546 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.546 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.546 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.546 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.547 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.547 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.547 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.547 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.547 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.547 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.548 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.548 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.548 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.548 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.548 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.548 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.548 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.549 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.549 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.549 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.549 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.549 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.549 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.549 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.550 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.550 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.550 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.550 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.550 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.550 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.550 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.551 2 WARNING oslo_config.cfg [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct 10 10:05:39 compute-1 nova_compute[234052]: live_migration_uri is deprecated for removal in favor of two other options that
Oct 10 10:05:39 compute-1 nova_compute[234052]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct 10 10:05:39 compute-1 nova_compute[234052]: and ``live_migration_inbound_addr`` respectively.
Oct 10 10:05:39 compute-1 nova_compute[234052]: ).  Its value may be silently ignored in the future.
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.551 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.551 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.551 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.551 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.552 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.552 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.552 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.552 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.552 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.553 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.553 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.553 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.553 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.553 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.554 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.554 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.554 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.554 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.554 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.rbd_secret_uuid        = 21f084a3-af34-5230-afe4-ea5cd24a55f4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.555 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.555 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.555 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.555 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.555 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.555 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.556 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.556 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.556 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.556 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.556 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.556 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.557 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.557 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.557 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.557 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.557 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.558 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.558 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.558 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.558 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.558 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.558 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.559 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.559 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.559 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.559 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.559 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.560 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.560 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.560 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.560 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.560 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.560 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.561 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.561 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.561 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.561 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.561 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.561 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.561 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.562 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.562 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.562 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.562 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.562 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.562 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.563 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.563 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.563 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.563 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.563 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.563 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.563 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.564 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.564 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.564 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.564 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.564 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.564 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.564 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.565 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.565 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.565 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.565 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.565 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.566 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.566 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.566 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.566 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.566 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.566 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.567 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.567 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.567 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.567 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.567 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.567 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.567 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.568 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.568 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.568 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.568 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.568 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.568 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.569 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.569 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.569 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.569 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.569 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.570 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.570 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.570 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.570 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.570 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.571 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.571 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.571 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.571 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.571 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.572 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.572 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.572 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.572 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.572 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.573 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.573 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.573 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.573 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.573 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.574 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.574 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.574 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.574 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.575 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.575 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.575 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.575 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.575 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.576 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.576 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.576 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.576 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.576 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.577 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.577 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.577 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.577 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.577 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.578 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.578 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.578 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.578 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.578 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.578 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.579 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.579 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.579 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.579 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.579 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.579 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.579 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.580 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.580 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.580 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.580 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.580 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.580 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.581 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.581 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.581 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.581 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.581 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.581 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.582 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.582 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.582 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.582 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.582 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.583 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.583 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.583 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.583 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.583 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.583 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.584 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.584 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.584 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.584 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.584 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.584 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.584 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.585 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.585 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.585 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.585 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.585 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.585 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.585 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.586 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.586 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.586 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.586 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.586 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.586 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.586 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.587 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.587 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.587 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.587 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.587 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.587 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.587 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.588 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.588 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.588 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.588 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.588 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.588 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.588 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.589 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.589 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.589 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.589 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.589 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.589 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.589 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.590 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.590 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.590 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.590 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.590 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.590 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.590 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.590 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.591 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.591 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.591 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.591 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.591 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.591 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.591 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.592 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.592 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.592 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.592 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.592 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.592 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.592 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.593 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.593 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.593 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.593 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.593 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vnc.server_proxyclient_address = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.593 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.594 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.594 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.594 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.594 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.594 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.594 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.594 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.595 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.595 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.595 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.595 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.595 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.595 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.595 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.596 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.596 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.596 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.596 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.596 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.596 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.597 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.597 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.597 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.597 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.597 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.597 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.597 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.598 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.598 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.598 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.598 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.598 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.598 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.599 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.599 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.599 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.599 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.599 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.599 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.600 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.600 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.600 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.600 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.600 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.600 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.601 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.601 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.601 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.601 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.601 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.601 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.601 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.602 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.602 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.602 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.602 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.602 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.602 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.602 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.603 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.603 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.603 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.603 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.603 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.603 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.603 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.604 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.604 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.604 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.604 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.604 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.604 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.605 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.605 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.605 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.605 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.605 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.605 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.605 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.606 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.606 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.606 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.606 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.606 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.606 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.606 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.607 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.607 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.607 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.607 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.607 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.607 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.608 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.608 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.608 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.608 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.608 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.608 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.609 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.609 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.609 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.609 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.609 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.609 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.610 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.610 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.610 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.610 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.610 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.611 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.611 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.611 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.611 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.611 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.611 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.611 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.612 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.612 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.612 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.612 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.612 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.612 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.613 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.613 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.613 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.613 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.613 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.613 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.614 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:39 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:39 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:39.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.614 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.614 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.614 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.614 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.615 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.615 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.615 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.615 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.615 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.615 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.616 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.616 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.616 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.616 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.616 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.617 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.617 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.617 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.617 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.617 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.617 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.618 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.618 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.618 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.618 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.618 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.619 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.619 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.619 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.619 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.619 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.620 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.620 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.620 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.620 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.620 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.620 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.621 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.621 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.621 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.621 2 DEBUG oslo_service.service [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.622 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.639 2 DEBUG nova.virt.libvirt.host [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.639 2 DEBUG nova.virt.libvirt.host [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.640 2 DEBUG nova.virt.libvirt.host [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.640 2 DEBUG nova.virt.libvirt.host [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Oct 10 10:05:39 compute-1 systemd[1]: Starting libvirt QEMU daemon...
Oct 10 10:05:39 compute-1 systemd[1]: Started libvirt QEMU daemon.
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.702 2 DEBUG nova.virt.libvirt.host [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fd597d93e50> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.704 2 DEBUG nova.virt.libvirt.host [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fd597d93e50> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.705 2 INFO nova.virt.libvirt.driver [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Connection event '1' reason 'None'
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.719 2 WARNING nova.virt.libvirt.driver [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Cannot update service status on host "compute-1.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.
Oct 10 10:05:39 compute-1 nova_compute[234052]: 2025-10-10 10:05:39.719 2 DEBUG nova.virt.libvirt.volume.mount [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Oct 10 10:05:39 compute-1 ceph-mon[79167]: pgmap v583: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:39 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:05:39 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:05:39 compute-1 sudo[234545]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:40 compute-1 podman[234758]: 2025-10-10 10:05:40.165747393 +0000 UTC m=+0.094324464 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=multipathd)
Oct 10 10:05:40 compute-1 podman[234752]: 2025-10-10 10:05:40.167705037 +0000 UTC m=+0.095650800 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 10 10:05:40 compute-1 sudo[234840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mehdtiglvnvvyahnznklmxovncdnrzlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090739.8828075-5210-204072674130872/AnsiballZ_podman_container.py'
Oct 10 10:05:40 compute-1 sudo[234840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:05:40 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:40 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:40 compute-1 python3.9[234843]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 10 10:05:40 compute-1 nova_compute[234052]: 2025-10-10 10:05:40.582 2 INFO nova.virt.libvirt.host [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Libvirt host capabilities <capabilities>
Oct 10 10:05:40 compute-1 nova_compute[234052]: 
Oct 10 10:05:40 compute-1 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 10:05:40 compute-1 nova_compute[234052]:   <host>
Oct 10 10:05:40 compute-1 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <uuid>b3ce5971-8a21-4607-a1ce-4c5a00fcffdd</uuid>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <cpu>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <arch>x86_64</arch>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model>EPYC-Rome-v4</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <vendor>AMD</vendor>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <microcode version='16777317'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <signature family='23' model='49' stepping='0'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <maxphysaddr mode='emulate' bits='40'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature name='x2apic'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature name='tsc-deadline'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature name='osxsave'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature name='hypervisor'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature name='tsc_adjust'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature name='spec-ctrl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature name='stibp'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature name='arch-capabilities'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature name='ssbd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature name='cmp_legacy'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature name='topoext'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature name='virt-ssbd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature name='lbrv'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature name='tsc-scale'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature name='vmcb-clean'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature name='pause-filter'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature name='pfthreshold'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature name='svme-addr-chk'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature name='rdctl-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature name='skip-l1dfl-vmentry'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature name='mds-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature name='pschange-mc-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <pages unit='KiB' size='4'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <pages unit='KiB' size='2048'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <pages unit='KiB' size='1048576'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </cpu>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <power_management>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <suspend_mem/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </power_management>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <iommu support='no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <migration_features>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <live/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <uri_transports>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <uri_transport>tcp</uri_transport>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <uri_transport>rdma</uri_transport>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </uri_transports>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </migration_features>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <topology>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <cells num='1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <cell id='0'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:           <memory unit='KiB'>7864356</memory>
Oct 10 10:05:40 compute-1 nova_compute[234052]:           <pages unit='KiB' size='4'>1966089</pages>
Oct 10 10:05:40 compute-1 nova_compute[234052]:           <pages unit='KiB' size='2048'>0</pages>
Oct 10 10:05:40 compute-1 nova_compute[234052]:           <pages unit='KiB' size='1048576'>0</pages>
Oct 10 10:05:40 compute-1 nova_compute[234052]:           <distances>
Oct 10 10:05:40 compute-1 nova_compute[234052]:             <sibling id='0' value='10'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:           </distances>
Oct 10 10:05:40 compute-1 nova_compute[234052]:           <cpus num='8'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:           </cpus>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         </cell>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </cells>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </topology>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <cache>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </cache>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <secmodel>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model>selinux</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <doi>0</doi>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </secmodel>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <secmodel>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model>dac</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <doi>0</doi>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <baselabel type='kvm'>+107:+107</baselabel>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <baselabel type='qemu'>+107:+107</baselabel>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </secmodel>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   </host>
Oct 10 10:05:40 compute-1 nova_compute[234052]: 
Oct 10 10:05:40 compute-1 nova_compute[234052]:   <guest>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <os_type>hvm</os_type>
Oct 10 10:05:40 compute-1 sudo[234840]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <arch name='i686'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <wordsize>32</wordsize>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <domain type='qemu'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <domain type='kvm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </arch>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <features>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <pae/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <nonpae/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <acpi default='on' toggle='yes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <apic default='on' toggle='no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <cpuselection/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <deviceboot/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <disksnapshot default='on' toggle='no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <externalSnapshot/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </features>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   </guest>
Oct 10 10:05:40 compute-1 nova_compute[234052]: 
Oct 10 10:05:40 compute-1 nova_compute[234052]:   <guest>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <os_type>hvm</os_type>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <arch name='x86_64'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <wordsize>64</wordsize>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <domain type='qemu'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <domain type='kvm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </arch>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <features>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <acpi default='on' toggle='yes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <apic default='on' toggle='no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <cpuselection/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <deviceboot/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <disksnapshot default='on' toggle='no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <externalSnapshot/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </features>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   </guest>
Oct 10 10:05:40 compute-1 nova_compute[234052]: 
Oct 10 10:05:40 compute-1 nova_compute[234052]: </capabilities>
Oct 10 10:05:40 compute-1 nova_compute[234052]: 
Oct 10 10:05:40 compute-1 nova_compute[234052]: 2025-10-10 10:05:40.593 2 DEBUG nova.virt.libvirt.host [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 10 10:05:40 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:40 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ac8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:40 compute-1 nova_compute[234052]: 2025-10-10 10:05:40.632 2 DEBUG nova.virt.libvirt.host [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct 10 10:05:40 compute-1 nova_compute[234052]: <domainCapabilities>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   <path>/usr/libexec/qemu-kvm</path>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   <domain>kvm</domain>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   <arch>i686</arch>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   <vcpu max='240'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   <iothreads supported='yes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   <os supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <enum name='firmware'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <loader supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='type'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>rom</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>pflash</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='readonly'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>yes</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>no</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='secure'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>no</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </loader>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   </os>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   <cpu>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <mode name='host-passthrough' supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='hostPassthroughMigratable'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>on</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>off</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </mode>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <mode name='maximum' supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='maximumMigratable'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>on</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>off</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </mode>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <mode name='host-model' supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <vendor>AMD</vendor>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='x2apic'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='tsc-deadline'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='hypervisor'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='tsc_adjust'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='spec-ctrl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='stibp'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='arch-capabilities'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='ssbd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='cmp_legacy'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='overflow-recov'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='succor'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='ibrs'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='amd-ssbd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='virt-ssbd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='lbrv'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='tsc-scale'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='vmcb-clean'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='flushbyasid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='pause-filter'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='pfthreshold'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='svme-addr-chk'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='rdctl-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='mds-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='pschange-mc-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='gds-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='rfds-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='disable' name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </mode>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <mode name='custom' supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Broadwell'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Broadwell-IBRS'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Broadwell-noTSX'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Broadwell-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Broadwell-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Broadwell-v3'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Broadwell-v4'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Cascadelake-Server'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Cascadelake-Server-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Cascadelake-Server-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Cascadelake-Server-v3'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Cascadelake-Server-v4'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Cascadelake-Server-v5'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Cooperlake'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Cooperlake-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Cooperlake-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Denverton'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='mpx'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Denverton-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='mpx'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Denverton-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Denverton-v3'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Dhyana-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='EPYC-Genoa'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amd-psfd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='auto-ibrs'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='no-nested-data-bp'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='null-sel-clr-base'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='stibp-always-on'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='EPYC-Genoa-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amd-psfd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='auto-ibrs'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='no-nested-data-bp'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='null-sel-clr-base'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='stibp-always-on'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='EPYC-Milan'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='EPYC-Milan-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='EPYC-Milan-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amd-psfd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='no-nested-data-bp'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='null-sel-clr-base'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='stibp-always-on'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='EPYC-Rome'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='EPYC-Rome-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='EPYC-Rome-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='EPYC-Rome-v3'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='EPYC-v3'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='EPYC-v4'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='GraniteRapids'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-fp16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='mcdt-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pbrsb-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='prefetchiti'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='GraniteRapids-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-fp16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='mcdt-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pbrsb-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='prefetchiti'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='GraniteRapids-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-fp16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx10'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx10-128'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx10-256'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx10-512'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='mcdt-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pbrsb-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='prefetchiti'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ss'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Haswell'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Haswell-IBRS'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Haswell-noTSX'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Haswell-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Haswell-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Haswell-v3'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Haswell-v4'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Icelake-Server'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Icelake-Server-noTSX'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Icelake-Server-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Icelake-Server-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Icelake-Server-v3'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Icelake-Server-v4'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Icelake-Server-v5'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Icelake-Server-v6'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Icelake-Server-v7'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='IvyBridge'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='IvyBridge-IBRS'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='IvyBridge-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='IvyBridge-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='KnightsMill'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-4fmaps'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-4vnniw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512er'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512pf'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ss'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='KnightsMill-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-4fmaps'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-4vnniw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512er'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512pf'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ss'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Opteron_G4'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fma4'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xop'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Opteron_G4-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fma4'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xop'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Opteron_G5'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fma4'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='tbm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xop'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Opteron_G5-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fma4'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='tbm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xop'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='SapphireRapids'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='SapphireRapids-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='SapphireRapids-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='SapphireRapids-v3'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ss'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='SierraForest'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-ne-convert'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-vnni-int8'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='cmpccxadd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='mcdt-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pbrsb-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='SierraForest-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-ne-convert'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-vnni-int8'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='cmpccxadd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='mcdt-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pbrsb-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Client'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Client-IBRS'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Client-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Client-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Client-v3'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Client-v4'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Server'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Server-IBRS'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Server-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Server-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Server-v3'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Server-v4'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Server-v5'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Snowridge'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='core-capability'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='mpx'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='split-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Snowridge-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='core-capability'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='mpx'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='split-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Snowridge-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='core-capability'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='split-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Snowridge-v3'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='core-capability'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='split-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Snowridge-v4'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='athlon'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='3dnow'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='3dnowext'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='athlon-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='3dnow'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='3dnowext'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='core2duo'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ss'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='core2duo-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ss'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='coreduo'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ss'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='coreduo-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ss'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='n270'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ss'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='n270-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ss'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='phenom'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='3dnow'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='3dnowext'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='phenom-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='3dnow'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='3dnowext'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </mode>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   </cpu>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   <memoryBacking supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <enum name='sourceType'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <value>file</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <value>anonymous</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <value>memfd</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   </memoryBacking>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   <devices>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <disk supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='diskDevice'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>disk</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>cdrom</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>floppy</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>lun</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='bus'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>ide</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>fdc</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>scsi</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>virtio</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>usb</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>sata</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='model'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>virtio</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>virtio-transitional</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>virtio-non-transitional</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </disk>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <graphics supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='type'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>vnc</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>egl-headless</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>dbus</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </graphics>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <video supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='modelType'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>vga</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>cirrus</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>virtio</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>none</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>bochs</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>ramfb</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </video>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <hostdev supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='mode'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>subsystem</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='startupPolicy'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>default</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>mandatory</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>requisite</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>optional</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='subsysType'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>usb</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>pci</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>scsi</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='capsType'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='pciBackend'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </hostdev>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <rng supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='model'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>virtio</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>virtio-transitional</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>virtio-non-transitional</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='backendModel'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>random</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>egd</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>builtin</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </rng>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <filesystem supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='driverType'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>path</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>handle</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>virtiofs</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </filesystem>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <tpm supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='model'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>tpm-tis</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>tpm-crb</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='backendModel'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>emulator</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>external</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='backendVersion'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>2.0</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </tpm>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <redirdev supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='bus'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>usb</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </redirdev>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <channel supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='type'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>pty</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>unix</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </channel>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <crypto supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='model'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='type'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>qemu</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='backendModel'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>builtin</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </crypto>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <interface supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='backendType'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>default</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>passt</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </interface>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <panic supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='model'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>isa</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>hyperv</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </panic>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   </devices>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   <features>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <gic supported='no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <vmcoreinfo supported='yes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <genid supported='yes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <backingStoreInput supported='yes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <backup supported='yes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <async-teardown supported='yes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <ps2 supported='yes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <sev supported='no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <sgx supported='no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <hyperv supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='features'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>relaxed</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>vapic</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>spinlocks</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>vpindex</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>runtime</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>synic</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>stimer</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>reset</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>vendor_id</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>frequencies</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>reenlightenment</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>tlbflush</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>ipi</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>avic</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>emsr_bitmap</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>xmm_input</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </hyperv>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <launchSecurity supported='no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   </features>
Oct 10 10:05:40 compute-1 nova_compute[234052]: </domainCapabilities>
Oct 10 10:05:40 compute-1 nova_compute[234052]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 10 10:05:40 compute-1 nova_compute[234052]: 2025-10-10 10:05:40.640 2 DEBUG nova.virt.libvirt.host [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct 10 10:05:40 compute-1 nova_compute[234052]: <domainCapabilities>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   <path>/usr/libexec/qemu-kvm</path>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   <domain>kvm</domain>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   <arch>i686</arch>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   <vcpu max='4096'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   <iothreads supported='yes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   <os supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <enum name='firmware'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <loader supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='type'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>rom</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>pflash</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='readonly'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>yes</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>no</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='secure'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>no</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </loader>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   </os>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   <cpu>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <mode name='host-passthrough' supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='hostPassthroughMigratable'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>on</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>off</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </mode>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <mode name='maximum' supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='maximumMigratable'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>on</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>off</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </mode>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <mode name='host-model' supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <vendor>AMD</vendor>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='x2apic'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='tsc-deadline'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='hypervisor'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='tsc_adjust'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='spec-ctrl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='stibp'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='arch-capabilities'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='ssbd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='cmp_legacy'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='overflow-recov'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='succor'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='ibrs'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='amd-ssbd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='virt-ssbd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='lbrv'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='tsc-scale'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='vmcb-clean'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='flushbyasid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='pause-filter'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='pfthreshold'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='svme-addr-chk'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='rdctl-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='mds-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='pschange-mc-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='gds-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='rfds-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='disable' name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </mode>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <mode name='custom' supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Broadwell'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Broadwell-IBRS'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Broadwell-noTSX'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Broadwell-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Broadwell-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Broadwell-v3'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Broadwell-v4'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Cascadelake-Server'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Cascadelake-Server-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Cascadelake-Server-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Cascadelake-Server-v3'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Cascadelake-Server-v4'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Cascadelake-Server-v5'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Cooperlake'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Cooperlake-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Cooperlake-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Denverton'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='mpx'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Denverton-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='mpx'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Denverton-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Denverton-v3'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Dhyana-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='EPYC-Genoa'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amd-psfd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='auto-ibrs'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='no-nested-data-bp'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='null-sel-clr-base'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='stibp-always-on'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='EPYC-Genoa-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amd-psfd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='auto-ibrs'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='no-nested-data-bp'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='null-sel-clr-base'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='stibp-always-on'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='EPYC-Milan'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='EPYC-Milan-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='EPYC-Milan-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amd-psfd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='no-nested-data-bp'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='null-sel-clr-base'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='stibp-always-on'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='EPYC-Rome'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='EPYC-Rome-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='EPYC-Rome-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='EPYC-Rome-v3'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='EPYC-v3'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='EPYC-v4'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='GraniteRapids'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-fp16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='mcdt-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pbrsb-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='prefetchiti'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='GraniteRapids-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-fp16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='mcdt-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pbrsb-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='prefetchiti'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='GraniteRapids-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-fp16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx10'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx10-128'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx10-256'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx10-512'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='mcdt-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pbrsb-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='prefetchiti'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ss'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Haswell'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Haswell-IBRS'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Haswell-noTSX'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Haswell-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Haswell-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Haswell-v3'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Haswell-v4'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Icelake-Server'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Icelake-Server-noTSX'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Icelake-Server-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Icelake-Server-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Icelake-Server-v3'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Icelake-Server-v4'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Icelake-Server-v5'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Icelake-Server-v6'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Icelake-Server-v7'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='IvyBridge'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='IvyBridge-IBRS'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='IvyBridge-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='IvyBridge-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='KnightsMill'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-4fmaps'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-4vnniw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512er'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512pf'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ss'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='KnightsMill-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-4fmaps'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-4vnniw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512er'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512pf'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ss'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Opteron_G4'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fma4'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xop'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Opteron_G4-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fma4'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xop'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Opteron_G5'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fma4'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='tbm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xop'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Opteron_G5-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fma4'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='tbm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xop'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='SapphireRapids'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='SapphireRapids-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='SapphireRapids-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='SapphireRapids-v3'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ss'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='SierraForest'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-ne-convert'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-vnni-int8'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='cmpccxadd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='mcdt-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pbrsb-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='SierraForest-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-ne-convert'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-vnni-int8'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='cmpccxadd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='mcdt-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pbrsb-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Client'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Client-IBRS'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Client-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Client-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Client-v3'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Client-v4'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Server'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Server-IBRS'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Server-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Server-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Server-v3'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Server-v4'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Server-v5'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Snowridge'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='core-capability'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='mpx'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='split-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Snowridge-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='core-capability'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='mpx'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='split-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Snowridge-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='core-capability'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='split-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Snowridge-v3'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='core-capability'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='split-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Snowridge-v4'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='athlon'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='3dnow'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='3dnowext'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='athlon-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='3dnow'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='3dnowext'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='core2duo'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ss'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='core2duo-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ss'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='coreduo'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ss'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='coreduo-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ss'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='n270'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ss'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='n270-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ss'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='phenom'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='3dnow'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='3dnowext'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='phenom-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='3dnow'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='3dnowext'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </mode>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   </cpu>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   <memoryBacking supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <enum name='sourceType'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <value>file</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <value>anonymous</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <value>memfd</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   </memoryBacking>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   <devices>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <disk supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='diskDevice'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>disk</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>cdrom</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>floppy</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>lun</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='bus'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>fdc</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>scsi</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>virtio</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>usb</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>sata</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='model'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>virtio</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>virtio-transitional</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>virtio-non-transitional</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </disk>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <graphics supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='type'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>vnc</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>egl-headless</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>dbus</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </graphics>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <video supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='modelType'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>vga</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>cirrus</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>virtio</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>none</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>bochs</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>ramfb</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </video>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <hostdev supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='mode'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>subsystem</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='startupPolicy'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>default</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>mandatory</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>requisite</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>optional</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='subsysType'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>usb</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>pci</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>scsi</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='capsType'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='pciBackend'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </hostdev>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <rng supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='model'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>virtio</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>virtio-transitional</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>virtio-non-transitional</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='backendModel'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>random</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>egd</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>builtin</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </rng>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <filesystem supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='driverType'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>path</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>handle</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>virtiofs</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </filesystem>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <tpm supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='model'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>tpm-tis</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>tpm-crb</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='backendModel'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>emulator</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>external</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='backendVersion'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>2.0</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </tpm>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <redirdev supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='bus'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>usb</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </redirdev>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <channel supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='type'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>pty</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>unix</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </channel>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <crypto supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='model'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='type'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>qemu</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='backendModel'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>builtin</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </crypto>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <interface supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='backendType'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>default</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>passt</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </interface>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <panic supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='model'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>isa</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>hyperv</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </panic>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   </devices>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   <features>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <gic supported='no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <vmcoreinfo supported='yes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <genid supported='yes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <backingStoreInput supported='yes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <backup supported='yes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <async-teardown supported='yes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <ps2 supported='yes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <sev supported='no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <sgx supported='no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <hyperv supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='features'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>relaxed</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>vapic</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>spinlocks</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>vpindex</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>runtime</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>synic</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>stimer</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>reset</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>vendor_id</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>frequencies</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>reenlightenment</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>tlbflush</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>ipi</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>avic</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>emsr_bitmap</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>xmm_input</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </hyperv>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <launchSecurity supported='no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   </features>
Oct 10 10:05:40 compute-1 nova_compute[234052]: </domainCapabilities>
Oct 10 10:05:40 compute-1 nova_compute[234052]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 10 10:05:40 compute-1 nova_compute[234052]: 2025-10-10 10:05:40.692 2 DEBUG nova.virt.libvirt.host [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 10 10:05:40 compute-1 nova_compute[234052]: 2025-10-10 10:05:40.697 2 DEBUG nova.virt.libvirt.host [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct 10 10:05:40 compute-1 nova_compute[234052]: <domainCapabilities>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   <path>/usr/libexec/qemu-kvm</path>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   <domain>kvm</domain>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   <arch>x86_64</arch>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   <vcpu max='240'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   <iothreads supported='yes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   <os supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <enum name='firmware'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <loader supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='type'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>rom</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>pflash</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='readonly'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>yes</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>no</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='secure'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>no</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </loader>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   </os>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   <cpu>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <mode name='host-passthrough' supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='hostPassthroughMigratable'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>on</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>off</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </mode>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <mode name='maximum' supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='maximumMigratable'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>on</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>off</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </mode>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <mode name='host-model' supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <vendor>AMD</vendor>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='x2apic'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='tsc-deadline'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='hypervisor'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='tsc_adjust'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='spec-ctrl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='stibp'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='arch-capabilities'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='ssbd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='cmp_legacy'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='overflow-recov'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='succor'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='ibrs'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='amd-ssbd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='virt-ssbd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='lbrv'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='tsc-scale'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='vmcb-clean'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='flushbyasid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='pause-filter'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='pfthreshold'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='svme-addr-chk'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='rdctl-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='mds-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='pschange-mc-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='gds-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='rfds-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='disable' name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </mode>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <mode name='custom' supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Broadwell'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Broadwell-IBRS'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Broadwell-noTSX'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Broadwell-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Broadwell-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Broadwell-v3'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Broadwell-v4'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Cascadelake-Server'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Cascadelake-Server-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Cascadelake-Server-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Cascadelake-Server-v3'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Cascadelake-Server-v4'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Cascadelake-Server-v5'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Cooperlake'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Cooperlake-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Cooperlake-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Denverton'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='mpx'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Denverton-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='mpx'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Denverton-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Denverton-v3'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Dhyana-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='EPYC-Genoa'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amd-psfd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='auto-ibrs'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='no-nested-data-bp'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='null-sel-clr-base'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='stibp-always-on'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='EPYC-Genoa-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amd-psfd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='auto-ibrs'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='no-nested-data-bp'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='null-sel-clr-base'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='stibp-always-on'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='EPYC-Milan'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='EPYC-Milan-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='EPYC-Milan-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amd-psfd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='no-nested-data-bp'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='null-sel-clr-base'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='stibp-always-on'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='EPYC-Rome'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='EPYC-Rome-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='EPYC-Rome-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='EPYC-Rome-v3'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='EPYC-v3'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='EPYC-v4'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='GraniteRapids'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-fp16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='mcdt-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pbrsb-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='prefetchiti'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='GraniteRapids-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-fp16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='mcdt-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pbrsb-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='prefetchiti'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='GraniteRapids-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-fp16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx10'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx10-128'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx10-256'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx10-512'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='mcdt-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pbrsb-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='prefetchiti'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ss'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Haswell'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Haswell-IBRS'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Haswell-noTSX'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Haswell-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Haswell-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Haswell-v3'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Haswell-v4'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Icelake-Server'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Icelake-Server-noTSX'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Icelake-Server-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Icelake-Server-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Icelake-Server-v3'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Icelake-Server-v4'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Icelake-Server-v5'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Icelake-Server-v6'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Icelake-Server-v7'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='IvyBridge'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='IvyBridge-IBRS'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='IvyBridge-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='IvyBridge-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='KnightsMill'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-4fmaps'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-4vnniw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512er'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512pf'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ss'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='KnightsMill-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-4fmaps'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-4vnniw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512er'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512pf'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ss'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Opteron_G4'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fma4'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xop'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Opteron_G4-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fma4'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xop'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Opteron_G5'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fma4'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='tbm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xop'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Opteron_G5-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fma4'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='tbm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xop'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='SapphireRapids'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='SapphireRapids-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='SapphireRapids-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='SapphireRapids-v3'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ss'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='SierraForest'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-ne-convert'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-vnni-int8'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='cmpccxadd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='mcdt-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pbrsb-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='SierraForest-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-ne-convert'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-vnni-int8'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='cmpccxadd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='mcdt-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pbrsb-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Client'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Client-IBRS'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Client-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Client-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Client-v3'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Client-v4'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Server'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Server-IBRS'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Server-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Server-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Server-v3'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Server-v4'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Server-v5'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Snowridge'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='core-capability'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='mpx'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='split-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Snowridge-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='core-capability'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='mpx'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='split-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Snowridge-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='core-capability'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='split-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Snowridge-v3'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='core-capability'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='split-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Snowridge-v4'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='athlon'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='3dnow'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='3dnowext'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='athlon-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='3dnow'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='3dnowext'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='core2duo'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ss'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='core2duo-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ss'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='coreduo'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ss'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='coreduo-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ss'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='n270'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ss'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='n270-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ss'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='phenom'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='3dnow'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='3dnowext'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='phenom-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='3dnow'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='3dnowext'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </mode>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   </cpu>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   <memoryBacking supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <enum name='sourceType'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <value>file</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <value>anonymous</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <value>memfd</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   </memoryBacking>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   <devices>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <disk supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='diskDevice'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>disk</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>cdrom</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>floppy</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>lun</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='bus'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>ide</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>fdc</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>scsi</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>virtio</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>usb</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>sata</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='model'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>virtio</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>virtio-transitional</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>virtio-non-transitional</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </disk>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <graphics supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='type'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>vnc</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>egl-headless</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>dbus</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </graphics>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <video supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='modelType'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>vga</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>cirrus</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>virtio</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>none</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>bochs</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>ramfb</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </video>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <hostdev supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='mode'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>subsystem</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='startupPolicy'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>default</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>mandatory</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>requisite</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>optional</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='subsysType'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>usb</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>pci</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>scsi</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='capsType'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='pciBackend'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </hostdev>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <rng supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='model'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>virtio</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>virtio-transitional</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>virtio-non-transitional</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='backendModel'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>random</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>egd</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>builtin</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </rng>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <filesystem supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='driverType'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>path</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>handle</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>virtiofs</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </filesystem>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <tpm supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='model'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>tpm-tis</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>tpm-crb</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='backendModel'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>emulator</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>external</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='backendVersion'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>2.0</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </tpm>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <redirdev supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='bus'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>usb</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </redirdev>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <channel supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='type'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>pty</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>unix</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </channel>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <crypto supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='model'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='type'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>qemu</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='backendModel'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>builtin</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </crypto>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <interface supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='backendType'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>default</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>passt</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </interface>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <panic supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='model'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>isa</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>hyperv</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </panic>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   </devices>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   <features>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <gic supported='no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <vmcoreinfo supported='yes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <genid supported='yes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <backingStoreInput supported='yes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <backup supported='yes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <async-teardown supported='yes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <ps2 supported='yes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <sev supported='no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <sgx supported='no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <hyperv supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='features'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>relaxed</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>vapic</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>spinlocks</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>vpindex</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>runtime</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>synic</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>stimer</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>reset</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>vendor_id</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>frequencies</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>reenlightenment</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>tlbflush</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>ipi</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>avic</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>emsr_bitmap</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>xmm_input</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </hyperv>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <launchSecurity supported='no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   </features>
Oct 10 10:05:40 compute-1 nova_compute[234052]: </domainCapabilities>
Oct 10 10:05:40 compute-1 nova_compute[234052]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 10 10:05:40 compute-1 nova_compute[234052]: 2025-10-10 10:05:40.757 2 DEBUG nova.virt.libvirt.host [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct 10 10:05:40 compute-1 nova_compute[234052]: <domainCapabilities>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   <path>/usr/libexec/qemu-kvm</path>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   <domain>kvm</domain>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   <arch>x86_64</arch>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   <vcpu max='4096'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   <iothreads supported='yes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   <os supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <enum name='firmware'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <value>efi</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <loader supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='type'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>rom</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>pflash</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='readonly'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>yes</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>no</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='secure'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>yes</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>no</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </loader>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   </os>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   <cpu>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <mode name='host-passthrough' supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='hostPassthroughMigratable'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>on</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>off</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </mode>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <mode name='maximum' supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='maximumMigratable'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>on</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>off</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </mode>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <mode name='host-model' supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <vendor>AMD</vendor>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='x2apic'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='tsc-deadline'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='hypervisor'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='tsc_adjust'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='spec-ctrl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='stibp'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='arch-capabilities'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='ssbd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='cmp_legacy'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='overflow-recov'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='succor'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='ibrs'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='amd-ssbd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='virt-ssbd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='lbrv'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='tsc-scale'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='vmcb-clean'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='flushbyasid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='pause-filter'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='pfthreshold'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='svme-addr-chk'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='rdctl-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='mds-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='pschange-mc-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='gds-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='require' name='rfds-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <feature policy='disable' name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </mode>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <mode name='custom' supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Broadwell'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Broadwell-IBRS'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Broadwell-noTSX'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Broadwell-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Broadwell-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Broadwell-v3'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Broadwell-v4'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Cascadelake-Server'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Cascadelake-Server-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Cascadelake-Server-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Cascadelake-Server-v3'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Cascadelake-Server-v4'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Cascadelake-Server-v5'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Cooperlake'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Cooperlake-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Cooperlake-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Denverton'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='mpx'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Denverton-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='mpx'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Denverton-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Denverton-v3'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Dhyana-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='EPYC-Genoa'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amd-psfd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='auto-ibrs'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='no-nested-data-bp'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='null-sel-clr-base'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='stibp-always-on'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='EPYC-Genoa-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amd-psfd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='auto-ibrs'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='no-nested-data-bp'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='null-sel-clr-base'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='stibp-always-on'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='EPYC-Milan'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='EPYC-Milan-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='EPYC-Milan-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amd-psfd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='no-nested-data-bp'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='null-sel-clr-base'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='stibp-always-on'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='EPYC-Rome'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='EPYC-Rome-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='EPYC-Rome-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='EPYC-Rome-v3'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='EPYC-v3'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='EPYC-v4'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='GraniteRapids'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-fp16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='mcdt-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pbrsb-no'/>
Oct 10 10:05:40 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='prefetchiti'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:05:40 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:05:40 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='GraniteRapids-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-fp16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='mcdt-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pbrsb-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='prefetchiti'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='GraniteRapids-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-fp16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx10'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx10-128'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx10-256'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx10-512'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='mcdt-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pbrsb-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='prefetchiti'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ss'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Haswell'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Haswell-IBRS'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Haswell-noTSX'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Haswell-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Haswell-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Haswell-v3'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Haswell-v4'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Icelake-Server'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Icelake-Server-noTSX'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Icelake-Server-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Icelake-Server-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Icelake-Server-v3'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Icelake-Server-v4'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Icelake-Server-v5'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Icelake-Server-v6'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Icelake-Server-v7'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='IvyBridge'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='IvyBridge-IBRS'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='IvyBridge-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='IvyBridge-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='KnightsMill'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-4fmaps'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-4vnniw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512er'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512pf'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ss'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='KnightsMill-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-4fmaps'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-4vnniw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512er'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512pf'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ss'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Opteron_G4'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fma4'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xop'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Opteron_G4-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fma4'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xop'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Opteron_G5'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fma4'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='tbm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xop'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Opteron_G5-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fma4'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='tbm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xop'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='SapphireRapids'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='SapphireRapids-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='SapphireRapids-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='SapphireRapids-v3'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-int8'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='amx-tile'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-bf16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-fp16'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bitalg'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrc'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fzrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='la57'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ss'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='taa-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xfd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='SierraForest'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-ne-convert'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-vnni-int8'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='cmpccxadd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='mcdt-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pbrsb-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='SierraForest-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-ifma'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-ne-convert'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-vnni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx-vnni-int8'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='cmpccxadd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fbsdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='fsrs'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ibrs-all'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='mcdt-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pbrsb-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='psdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='serialize'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vaes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Client'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Client-IBRS'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Client-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Client-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Client-v3'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Client-v4'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Server'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Server-IBRS'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Server-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Server-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='hle'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='rtm'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Server-v3'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Server-v4'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Skylake-Server-v5'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512bw'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512cd'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512dq'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512f'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='avx512vl'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='invpcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pcid'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='pku'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Snowridge'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='core-capability'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='mpx'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='split-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Snowridge-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='core-capability'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='mpx'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='split-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Snowridge-v2'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='core-capability'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='split-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Snowridge-v3'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='core-capability'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='split-lock-detect'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='Snowridge-v4'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='cldemote'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='erms'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='gfni'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdir64b'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='movdiri'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='xsaves'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='athlon'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='3dnow'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='3dnowext'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='athlon-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='3dnow'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='3dnowext'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='core2duo'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ss'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='core2duo-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ss'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='coreduo'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ss'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='coreduo-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ss'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='n270'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ss'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='n270-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='ss'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='phenom'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='3dnow'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='3dnowext'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <blockers model='phenom-v1'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='3dnow'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <feature name='3dnowext'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </blockers>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </mode>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   </cpu>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   <memoryBacking supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <enum name='sourceType'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <value>file</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <value>anonymous</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <value>memfd</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   </memoryBacking>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   <devices>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <disk supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='diskDevice'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>disk</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>cdrom</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>floppy</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>lun</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='bus'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>fdc</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>scsi</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>virtio</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>usb</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>sata</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='model'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>virtio</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>virtio-transitional</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>virtio-non-transitional</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </disk>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <graphics supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='type'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>vnc</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>egl-headless</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>dbus</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </graphics>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <video supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='modelType'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>vga</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>cirrus</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>virtio</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>none</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>bochs</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>ramfb</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </video>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <hostdev supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='mode'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>subsystem</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='startupPolicy'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>default</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>mandatory</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>requisite</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>optional</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='subsysType'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>usb</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>pci</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>scsi</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='capsType'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='pciBackend'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </hostdev>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <rng supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='model'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>virtio</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>virtio-transitional</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>virtio-non-transitional</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='backendModel'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>random</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>egd</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>builtin</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </rng>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <filesystem supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='driverType'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>path</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>handle</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>virtiofs</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </filesystem>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <tpm supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='model'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>tpm-tis</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>tpm-crb</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='backendModel'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>emulator</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>external</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='backendVersion'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>2.0</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </tpm>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <redirdev supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='bus'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>usb</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </redirdev>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <channel supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='type'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>pty</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>unix</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </channel>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <crypto supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='model'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='type'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>qemu</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='backendModel'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>builtin</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </crypto>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <interface supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='backendType'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>default</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>passt</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </interface>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <panic supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='model'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>isa</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>hyperv</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </panic>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   </devices>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   <features>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <gic supported='no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <vmcoreinfo supported='yes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <genid supported='yes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <backingStoreInput supported='yes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <backup supported='yes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <async-teardown supported='yes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <ps2 supported='yes'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <sev supported='no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <sgx supported='no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <hyperv supported='yes'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       <enum name='features'>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>relaxed</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>vapic</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>spinlocks</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>vpindex</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>runtime</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>synic</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>stimer</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>reset</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>vendor_id</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>frequencies</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>reenlightenment</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>tlbflush</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>ipi</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>avic</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>emsr_bitmap</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:         <value>xmm_input</value>
Oct 10 10:05:40 compute-1 nova_compute[234052]:       </enum>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     </hyperv>
Oct 10 10:05:40 compute-1 nova_compute[234052]:     <launchSecurity supported='no'/>
Oct 10 10:05:40 compute-1 nova_compute[234052]:   </features>
Oct 10 10:05:40 compute-1 nova_compute[234052]: </domainCapabilities>
Oct 10 10:05:40 compute-1 nova_compute[234052]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 10 10:05:40 compute-1 nova_compute[234052]: 2025-10-10 10:05:40.820 2 DEBUG nova.virt.libvirt.host [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct 10 10:05:40 compute-1 nova_compute[234052]: 2025-10-10 10:05:40.821 2 DEBUG nova.virt.libvirt.host [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct 10 10:05:40 compute-1 nova_compute[234052]: 2025-10-10 10:05:40.821 2 DEBUG nova.virt.libvirt.host [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct 10 10:05:40 compute-1 nova_compute[234052]: 2025-10-10 10:05:40.821 2 INFO nova.virt.libvirt.host [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Secure Boot support detected
Oct 10 10:05:40 compute-1 nova_compute[234052]: 2025-10-10 10:05:40.825 2 INFO nova.virt.libvirt.driver [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct 10 10:05:40 compute-1 nova_compute[234052]: 2025-10-10 10:05:40.826 2 INFO nova.virt.libvirt.driver [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct 10 10:05:40 compute-1 nova_compute[234052]: 2025-10-10 10:05:40.844 2 DEBUG nova.virt.libvirt.driver [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Oct 10 10:05:40 compute-1 nova_compute[234052]: 2025-10-10 10:05:40.868 2 INFO nova.virt.node [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Determined node identity c9b2c4a3-cb19-4387-8719-36027e3cdaec from /var/lib/nova/compute_id
Oct 10 10:05:40 compute-1 nova_compute[234052]: 2025-10-10 10:05:40.885 2 WARNING nova.compute.manager [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Compute nodes ['c9b2c4a3-cb19-4387-8719-36027e3cdaec'] for host compute-1.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Oct 10 10:05:40 compute-1 nova_compute[234052]: 2025-10-10 10:05:40.921 2 INFO nova.compute.manager [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Oct 10 10:05:40 compute-1 nova_compute[234052]: 2025-10-10 10:05:40.957 2 WARNING nova.compute.manager [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] No compute node record found for host compute-1.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.
Oct 10 10:05:40 compute-1 nova_compute[234052]: 2025-10-10 10:05:40.957 2 DEBUG oslo_concurrency.lockutils [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:05:40 compute-1 nova_compute[234052]: 2025-10-10 10:05:40.957 2 DEBUG oslo_concurrency.lockutils [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:05:40 compute-1 nova_compute[234052]: 2025-10-10 10:05:40.958 2 DEBUG oslo_concurrency.lockutils [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:05:40 compute-1 nova_compute[234052]: 2025-10-10 10:05:40.958 2 DEBUG nova.compute.resource_tracker [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:05:40 compute-1 nova_compute[234052]: 2025-10-10 10:05:40.958 2 DEBUG oslo_concurrency.processutils [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:05:41 compute-1 sudo[235044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viqdrvcwoshzujcnbxmlzqujjelxxefc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090740.8753471-5234-171179693921207/AnsiballZ_systemd.py'
Oct 10 10:05:41 compute-1 sudo[235044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:05:41 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:41 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:41 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:41.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:41 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:05:41 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2390731396' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:05:41 compute-1 nova_compute[234052]: 2025-10-10 10:05:41.428 2 DEBUG oslo_concurrency.processutils [None req-67cefb99-ca05-42e5-804e-ee88bcde7745 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:05:41 compute-1 systemd[1]: Starting libvirt nodedev daemon...
Oct 10 10:05:41 compute-1 systemd[1]: Started libvirt nodedev daemon.
Oct 10 10:05:41 compute-1 python3.9[235046]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 10 10:05:41 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:41 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:41 compute-1 systemd[1]: Stopping nova_compute container...
Oct 10 10:05:41 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:41 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:41 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:41.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:41 compute-1 nova_compute[234052]: 2025-10-10 10:05:41.659 2 DEBUG oslo_concurrency.lockutils [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:05:41 compute-1 nova_compute[234052]: 2025-10-10 10:05:41.660 2 DEBUG oslo_concurrency.lockutils [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:05:41 compute-1 nova_compute[234052]: 2025-10-10 10:05:41.661 2 DEBUG oslo_concurrency.lockutils [None req-4d8bfc5f-d3ca-410d-b797-2d50b9778e25 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:05:41 compute-1 ceph-mon[79167]: pgmap v584: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:41 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/3983368344' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:05:41 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1003042065' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:05:41 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/2390731396' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:05:42 compute-1 virtqemud[234629]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, )
Oct 10 10:05:42 compute-1 virtqemud[234629]: hostname: compute-1
Oct 10 10:05:42 compute-1 virtqemud[234629]: End of file while reading data: Input/output error
Oct 10 10:05:42 compute-1 systemd[1]: libpod-6594a2242ea0b6d21d9e3dcfd9bb04a710348cb3679522fa85600b84a6df5a8e.scope: Deactivated successfully.
Oct 10 10:05:42 compute-1 systemd[1]: libpod-6594a2242ea0b6d21d9e3dcfd9bb04a710348cb3679522fa85600b84a6df5a8e.scope: Consumed 3.558s CPU time.
Oct 10 10:05:42 compute-1 podman[235073]: 2025-10-10 10:05:42.030192203 +0000 UTC m=+0.421326236 container died 6594a2242ea0b6d21d9e3dcfd9bb04a710348cb3679522fa85600b84a6df5a8e (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 10 10:05:42 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6594a2242ea0b6d21d9e3dcfd9bb04a710348cb3679522fa85600b84a6df5a8e-userdata-shm.mount: Deactivated successfully.
Oct 10 10:05:42 compute-1 systemd[1]: var-lib-containers-storage-overlay-b981e6fe522e02b6e8cb5b05f3134ce9c4a909c2753bd0b3e72e520ad3a3ed45-merged.mount: Deactivated successfully.
Oct 10 10:05:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:05:42.198 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:05:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:05:42.199 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:05:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:05:42.199 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:05:42 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:05:42 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:42 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af000a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:42 compute-1 podman[235073]: 2025-10-10 10:05:42.619524215 +0000 UTC m=+1.010658258 container cleanup 6594a2242ea0b6d21d9e3dcfd9bb04a710348cb3679522fa85600b84a6df5a8e (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:05:42 compute-1 podman[235073]: nova_compute
Oct 10 10:05:42 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:42 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:42 compute-1 podman[235103]: nova_compute
Oct 10 10:05:42 compute-1 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Oct 10 10:05:42 compute-1 systemd[1]: Stopped nova_compute container.
Oct 10 10:05:42 compute-1 systemd[1]: Starting nova_compute container...
Oct 10 10:05:42 compute-1 systemd[1]: Started libcrun container.
Oct 10 10:05:42 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b981e6fe522e02b6e8cb5b05f3134ce9c4a909c2753bd0b3e72e520ad3a3ed45/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 10 10:05:42 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b981e6fe522e02b6e8cb5b05f3134ce9c4a909c2753bd0b3e72e520ad3a3ed45/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 10 10:05:42 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b981e6fe522e02b6e8cb5b05f3134ce9c4a909c2753bd0b3e72e520ad3a3ed45/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 10 10:05:42 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b981e6fe522e02b6e8cb5b05f3134ce9c4a909c2753bd0b3e72e520ad3a3ed45/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 10 10:05:42 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b981e6fe522e02b6e8cb5b05f3134ce9c4a909c2753bd0b3e72e520ad3a3ed45/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 10 10:05:42 compute-1 podman[235116]: 2025-10-10 10:05:42.865941421 +0000 UTC m=+0.124717396 container init 6594a2242ea0b6d21d9e3dcfd9bb04a710348cb3679522fa85600b84a6df5a8e (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 10 10:05:42 compute-1 podman[235116]: 2025-10-10 10:05:42.87977989 +0000 UTC m=+0.138555865 container start 6594a2242ea0b6d21d9e3dcfd9bb04a710348cb3679522fa85600b84a6df5a8e (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, org.label-schema.vendor=CentOS, container_name=nova_compute, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:05:42 compute-1 podman[235116]: nova_compute
Oct 10 10:05:42 compute-1 nova_compute[235132]: + sudo -E kolla_set_configs
Oct 10 10:05:42 compute-1 systemd[1]: Started nova_compute container.
Oct 10 10:05:42 compute-1 sudo[235044]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:42 compute-1 nova_compute[235132]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 10 10:05:42 compute-1 nova_compute[235132]: INFO:__main__:Validating config file
Oct 10 10:05:42 compute-1 nova_compute[235132]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 10 10:05:42 compute-1 nova_compute[235132]: INFO:__main__:Copying service configuration files
Oct 10 10:05:42 compute-1 nova_compute[235132]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 10 10:05:42 compute-1 nova_compute[235132]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 10 10:05:42 compute-1 nova_compute[235132]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 10 10:05:42 compute-1 nova_compute[235132]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Oct 10 10:05:42 compute-1 nova_compute[235132]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 10 10:05:42 compute-1 nova_compute[235132]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 10 10:05:42 compute-1 nova_compute[235132]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 10 10:05:42 compute-1 nova_compute[235132]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 10 10:05:42 compute-1 nova_compute[235132]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 10 10:05:42 compute-1 nova_compute[235132]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 10 10:05:42 compute-1 nova_compute[235132]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 10 10:05:42 compute-1 nova_compute[235132]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 10 10:05:42 compute-1 nova_compute[235132]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Oct 10 10:05:42 compute-1 nova_compute[235132]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 10 10:05:42 compute-1 nova_compute[235132]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 10 10:05:42 compute-1 nova_compute[235132]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 10 10:05:42 compute-1 nova_compute[235132]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 10 10:05:42 compute-1 nova_compute[235132]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 10 10:05:42 compute-1 nova_compute[235132]: INFO:__main__:Deleting /etc/ceph
Oct 10 10:05:42 compute-1 nova_compute[235132]: INFO:__main__:Creating directory /etc/ceph
Oct 10 10:05:42 compute-1 nova_compute[235132]: INFO:__main__:Setting permission for /etc/ceph
Oct 10 10:05:42 compute-1 nova_compute[235132]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct 10 10:05:42 compute-1 nova_compute[235132]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 10 10:05:42 compute-1 nova_compute[235132]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct 10 10:05:42 compute-1 nova_compute[235132]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 10 10:05:42 compute-1 nova_compute[235132]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Oct 10 10:05:42 compute-1 nova_compute[235132]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 10 10:05:42 compute-1 nova_compute[235132]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 10 10:05:42 compute-1 nova_compute[235132]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Oct 10 10:05:42 compute-1 nova_compute[235132]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 10 10:05:42 compute-1 nova_compute[235132]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 10 10:05:42 compute-1 nova_compute[235132]: INFO:__main__:Writing out command to execute
Oct 10 10:05:42 compute-1 nova_compute[235132]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 10 10:05:42 compute-1 nova_compute[235132]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 10 10:05:42 compute-1 nova_compute[235132]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 10 10:05:42 compute-1 nova_compute[235132]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 10 10:05:42 compute-1 nova_compute[235132]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 10 10:05:43 compute-1 nova_compute[235132]: ++ cat /run_command
Oct 10 10:05:43 compute-1 nova_compute[235132]: + CMD=nova-compute
Oct 10 10:05:43 compute-1 nova_compute[235132]: + ARGS=
Oct 10 10:05:43 compute-1 nova_compute[235132]: + sudo kolla_copy_cacerts
Oct 10 10:05:43 compute-1 nova_compute[235132]: + [[ ! -n '' ]]
Oct 10 10:05:43 compute-1 nova_compute[235132]: + . kolla_extend_start
Oct 10 10:05:43 compute-1 nova_compute[235132]: Running command: 'nova-compute'
Oct 10 10:05:43 compute-1 nova_compute[235132]: + echo 'Running command: '\''nova-compute'\'''
Oct 10 10:05:43 compute-1 nova_compute[235132]: + umask 0022
Oct 10 10:05:43 compute-1 nova_compute[235132]: + exec nova-compute
Oct 10 10:05:43 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:43 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:43 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:43.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:43 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ac8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:43 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:43 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:43 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:43.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:44 compute-1 ceph-mon[79167]: pgmap v585: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 10:05:44 compute-1 sudo[235306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hagdwqkptjhvgebsadmynvwrfdborynk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760090743.862454-5261-20702637430810/AnsiballZ_podman_container.py'
Oct 10 10:05:44 compute-1 sudo[235306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:05:44 compute-1 podman[235269]: 2025-10-10 10:05:44.275369394 +0000 UTC m=+0.111801772 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 10 10:05:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:44 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:44 compute-1 python3.9[235315]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 10 10:05:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:44 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af000a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:44 compute-1 systemd[1]: Started libpod-conmon-aa3b7ea7563e4765bb54fd4ce4b9fe769a740174fd50f746c6e754671da4d69c.scope.
Oct 10 10:05:44 compute-1 systemd[1]: Started libcrun container.
Oct 10 10:05:44 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d338469282c94a7a15352352fb29a0feb2aac2728a8b25e442b5f416a8625f1/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Oct 10 10:05:44 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d338469282c94a7a15352352fb29a0feb2aac2728a8b25e442b5f416a8625f1/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 10 10:05:44 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d338469282c94a7a15352352fb29a0feb2aac2728a8b25e442b5f416a8625f1/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Oct 10 10:05:44 compute-1 podman[235345]: 2025-10-10 10:05:44.778696023 +0000 UTC m=+0.155424436 container init aa3b7ea7563e4765bb54fd4ce4b9fe769a740174fd50f746c6e754671da4d69c (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 10 10:05:44 compute-1 podman[235345]: 2025-10-10 10:05:44.785870039 +0000 UTC m=+0.162598382 container start aa3b7ea7563e4765bb54fd4ce4b9fe769a740174fd50f746c6e754671da4d69c (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, org.label-schema.license=GPLv2, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:05:44 compute-1 python3.9[235315]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Oct 10 10:05:44 compute-1 nova_compute_init[235369]: INFO:nova_statedir:Applying nova statedir ownership
Oct 10 10:05:44 compute-1 nova_compute_init[235369]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Oct 10 10:05:44 compute-1 nova_compute_init[235369]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Oct 10 10:05:44 compute-1 nova_compute_init[235369]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Oct 10 10:05:44 compute-1 nova_compute_init[235369]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Oct 10 10:05:44 compute-1 nova_compute_init[235369]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Oct 10 10:05:44 compute-1 nova_compute_init[235369]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Oct 10 10:05:44 compute-1 nova_compute_init[235369]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Oct 10 10:05:44 compute-1 nova_compute_init[235369]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Oct 10 10:05:44 compute-1 nova_compute_init[235369]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Oct 10 10:05:44 compute-1 nova_compute_init[235369]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Oct 10 10:05:44 compute-1 nova_compute_init[235369]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Oct 10 10:05:44 compute-1 nova_compute_init[235369]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Oct 10 10:05:44 compute-1 nova_compute_init[235369]: INFO:nova_statedir:Nova statedir ownership complete
Oct 10 10:05:44 compute-1 systemd[1]: libpod-aa3b7ea7563e4765bb54fd4ce4b9fe769a740174fd50f746c6e754671da4d69c.scope: Deactivated successfully.
Oct 10 10:05:44 compute-1 podman[235370]: 2025-10-10 10:05:44.862055615 +0000 UTC m=+0.042063413 container died aa3b7ea7563e4765bb54fd4ce4b9fe769a740174fd50f746c6e754671da4d69c (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=nova_compute_init, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 10:05:44 compute-1 nova_compute[235132]: 2025-10-10 10:05:44.871 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 10 10:05:44 compute-1 nova_compute[235132]: 2025-10-10 10:05:44.872 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 10 10:05:44 compute-1 nova_compute[235132]: 2025-10-10 10:05:44.872 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 10 10:05:44 compute-1 nova_compute[235132]: 2025-10-10 10:05:44.872 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Oct 10 10:05:44 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-aa3b7ea7563e4765bb54fd4ce4b9fe769a740174fd50f746c6e754671da4d69c-userdata-shm.mount: Deactivated successfully.
Oct 10 10:05:44 compute-1 systemd[1]: var-lib-containers-storage-overlay-0d338469282c94a7a15352352fb29a0feb2aac2728a8b25e442b5f416a8625f1-merged.mount: Deactivated successfully.
Oct 10 10:05:44 compute-1 podman[235383]: 2025-10-10 10:05:44.941911331 +0000 UTC m=+0.074642085 container cleanup aa3b7ea7563e4765bb54fd4ce4b9fe769a740174fd50f746c6e754671da4d69c (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=edpm, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init)
Oct 10 10:05:44 compute-1 systemd[1]: libpod-conmon-aa3b7ea7563e4765bb54fd4ce4b9fe769a740174fd50f746c6e754671da4d69c.scope: Deactivated successfully.
Oct 10 10:05:44 compute-1 sudo[235306]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.003 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.034 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:05:45 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:45 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:45 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:45.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.483 2 INFO nova.virt.driver [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Oct 10 10:05:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:45 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ac8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:45 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:45 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:05:45 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:45.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.629 2 INFO nova.compute.provider_config [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.647 2 DEBUG oslo_concurrency.lockutils [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.647 2 DEBUG oslo_concurrency.lockutils [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.647 2 DEBUG oslo_concurrency.lockutils [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.648 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.648 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.648 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.648 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.648 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.648 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.648 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.649 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.649 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.649 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.649 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.649 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.649 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.649 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.650 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.650 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.650 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.650 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.650 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.650 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] console_host                   = compute-1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.650 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.651 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.651 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.651 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.651 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.651 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.651 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.652 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.652 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.652 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.652 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.652 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.652 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.652 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.653 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.653 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.653 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.653 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.653 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] host                           = compute-1.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.653 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.653 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.654 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.654 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.654 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.654 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.654 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.654 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.654 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.655 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.655 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.655 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.655 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.655 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.655 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.656 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.656 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.656 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.656 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.656 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.656 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.656 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.657 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.657 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.657 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.657 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.657 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.657 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.657 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.657 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.658 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.658 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.658 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.658 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.658 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.658 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.659 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.659 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.659 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.659 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.659 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.660 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] my_block_storage_ip            = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.660 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] my_ip                          = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.660 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.660 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.660 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.661 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.661 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.661 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.661 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.661 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.662 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.662 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.662 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.662 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.662 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.662 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.663 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.663 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.663 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.663 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.663 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.663 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.663 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.664 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.664 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.664 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.664 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.664 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.664 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 sshd-session[199072]: Connection closed by 192.168.122.30 port 45246
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.664 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.665 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.665 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.665 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.665 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.665 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.665 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.665 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.665 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 sshd-session[199069]: pam_unix(sshd:session): session closed for user zuul
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.666 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.666 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.666 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.666 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.666 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.666 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.666 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.667 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.667 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.667 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.667 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.667 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.667 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.667 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.668 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.668 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.668 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.668 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.668 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.668 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.668 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.669 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.669 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.669 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.669 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.669 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.669 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.669 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.670 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.670 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.670 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.670 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.670 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.670 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.670 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.671 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.671 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.671 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.671 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.671 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.671 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.672 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.672 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.672 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.672 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.672 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.672 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.673 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.673 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.673 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.673 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.673 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.673 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.673 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.674 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.674 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.674 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.674 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.674 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.674 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.674 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.675 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.675 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.675 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.675 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.675 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.676 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.676 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.676 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.676 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.676 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.676 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.677 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.677 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.677 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.677 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.677 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.677 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.677 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.678 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.678 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.678 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.678 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.678 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.678 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.679 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.679 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.679 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.679 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.679 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.679 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.680 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.680 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.680 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.680 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.680 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.680 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.681 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.681 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.681 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.681 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.681 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.681 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.681 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.682 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.682 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.682 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.682 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.682 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.682 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.683 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.683 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 systemd[1]: session-54.scope: Deactivated successfully.
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.683 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.683 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.683 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 systemd[1]: session-54.scope: Consumed 3min 13.374s CPU time.
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.684 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.684 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.684 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.684 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.684 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.684 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.684 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.685 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.685 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.685 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.685 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.685 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.685 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.686 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.686 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 systemd-logind[789]: Session 54 logged out. Waiting for processes to exit.
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.686 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.686 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.686 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.686 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.687 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.687 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.687 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.687 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.688 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.688 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.688 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.688 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.688 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.688 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.689 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.689 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.689 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.689 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.689 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.689 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.690 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 systemd-logind[789]: Removed session 54.
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.690 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.690 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.690 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.690 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.690 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.691 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.691 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.691 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.692 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.692 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.692 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.692 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.692 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.693 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.693 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.693 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.693 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.693 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.693 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.694 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.694 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.694 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.694 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.694 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.694 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.695 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.695 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.695 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.695 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.695 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.695 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.695 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.696 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.696 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.696 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.696 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.696 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.696 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.696 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.697 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.697 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.697 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.697 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.697 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.697 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.697 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.698 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.698 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.698 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.698 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.698 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.698 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.698 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.699 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.699 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.699 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.699 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.699 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.699 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.699 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.700 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.700 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.700 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.700 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.700 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.700 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.700 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.701 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.701 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.701 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.701 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.701 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.701 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.701 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.702 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.702 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.702 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.702 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.702 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.702 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.702 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.703 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.703 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.703 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.703 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.703 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.703 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.704 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.704 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.704 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.704 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.704 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.704 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.705 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.705 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.705 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.705 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.705 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.705 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.705 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.705 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.706 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.706 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.706 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.706 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.706 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.706 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.706 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.707 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.707 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.707 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.707 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.707 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.707 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.707 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.708 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.708 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.708 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.708 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.708 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.708 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.709 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.709 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.709 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.709 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.709 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.709 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.709 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.710 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.710 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.710 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.710 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.710 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.710 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.710 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.711 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.711 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.711 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.711 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.711 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.711 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.711 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.712 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.712 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.712 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.712 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.712 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.712 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.712 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.713 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.713 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.713 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.713 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.713 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.713 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.713 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.713 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.714 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.714 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.714 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.714 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.714 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.714 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.715 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.715 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.715 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.715 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.715 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.715 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.715 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.716 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.716 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.716 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.716 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.716 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.716 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.717 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.717 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.717 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.717 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.717 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.717 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.717 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.718 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.718 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.718 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.718 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.718 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.718 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.719 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.719 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.719 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.719 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.719 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.719 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.719 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.720 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.720 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.720 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.720 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.720 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.720 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.721 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.721 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.721 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.721 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.721 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.721 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.722 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.722 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.722 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.722 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.722 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.722 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.722 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.723 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.723 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.723 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.723 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.723 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.723 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.724 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.724 2 WARNING oslo_config.cfg [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct 10 10:05:45 compute-1 nova_compute[235132]: live_migration_uri is deprecated for removal in favor of two other options that
Oct 10 10:05:45 compute-1 nova_compute[235132]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct 10 10:05:45 compute-1 nova_compute[235132]: and ``live_migration_inbound_addr`` respectively.
Oct 10 10:05:45 compute-1 nova_compute[235132]: ).  Its value may be silently ignored in the future.
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.724 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.724 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.724 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.725 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.725 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.725 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.725 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.725 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.725 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.726 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.726 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.726 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.726 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.726 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.726 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.727 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.727 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.727 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.727 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.rbd_secret_uuid        = 21f084a3-af34-5230-afe4-ea5cd24a55f4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.727 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.727 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.727 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.728 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.728 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.728 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.728 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.728 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.728 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.729 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.729 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.729 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.729 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.729 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.729 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.729 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.730 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.730 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.730 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.730 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.730 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.730 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.730 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.731 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.731 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.731 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.731 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.731 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.731 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.731 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.732 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.732 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.732 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.732 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.732 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.732 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.732 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.733 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.733 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.733 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.733 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.733 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.733 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.733 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.734 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.734 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.734 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.734 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.734 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.734 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.734 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.735 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.735 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.735 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.735 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.735 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.735 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.735 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.736 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.736 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.736 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.736 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.736 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.736 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.737 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.737 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.737 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.737 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.737 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.737 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.738 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.738 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.738 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.738 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.738 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.738 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.738 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.739 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.739 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.739 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.739 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.739 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.739 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.740 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.740 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.740 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.740 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.740 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.740 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.740 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.741 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.741 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.741 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.741 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.741 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.741 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.741 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.741 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.742 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.742 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.742 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.742 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.742 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.742 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.742 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.743 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.743 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.743 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.743 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.743 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.743 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.743 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.744 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.744 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.744 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.744 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.744 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.744 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.744 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.745 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.745 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.745 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.745 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.745 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.746 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.746 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.746 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.746 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.746 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.746 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.746 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.747 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.747 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.747 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.747 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.747 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.747 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.747 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.748 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.748 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.748 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.748 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.748 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.748 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.748 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.749 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.749 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.749 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.749 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.749 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.749 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.749 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.750 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.750 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.750 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.750 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.750 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.750 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.751 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.751 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.751 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.751 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.751 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.751 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.752 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.752 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.752 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.752 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.752 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.752 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.752 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.752 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.753 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.753 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.753 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.753 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.753 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.753 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.754 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.754 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.754 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.754 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.754 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.754 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.754 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.755 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.755 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.755 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.755 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.755 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.755 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.756 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.756 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.756 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.756 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.756 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.756 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.756 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.757 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.757 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.757 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.757 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.757 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.757 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.758 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.758 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.758 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.758 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.758 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.758 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.758 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.759 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.759 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.759 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.759 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.759 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.759 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.760 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.760 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.760 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.760 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.760 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.760 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.760 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.761 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.761 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.761 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.761 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.761 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.762 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.762 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.762 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.762 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.762 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vnc.server_proxyclient_address = 192.168.122.101 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.762 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.763 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.763 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.763 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.763 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.763 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.763 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.763 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.764 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.764 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.764 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.764 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.764 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.764 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.764 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.765 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.765 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.765 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.765 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.765 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.765 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.765 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.765 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.766 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.766 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.766 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.766 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.766 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.766 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.767 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.767 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.767 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.767 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.767 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.767 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.767 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.768 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.768 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.768 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.768 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.768 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.768 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.769 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.769 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.769 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.769 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.769 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.769 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.769 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.770 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.770 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.770 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.770 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.770 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.770 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.770 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.771 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.771 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.771 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.771 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.771 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.771 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.772 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.772 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.772 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.772 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.772 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.772 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.773 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.773 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.773 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.773 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.773 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.773 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.773 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.774 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.774 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.774 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.774 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.774 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.774 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.775 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.775 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.775 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.775 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.775 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.775 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.775 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.776 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.776 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.776 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.776 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.776 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.777 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.777 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.777 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.777 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.777 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.777 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.777 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.778 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.778 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.778 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.778 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.778 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.778 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.779 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.779 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.779 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.779 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.779 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.779 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.779 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.780 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.780 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.780 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.780 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.780 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.780 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.780 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.781 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.781 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.781 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.781 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.781 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.782 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.782 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.782 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.782 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.782 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.782 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.783 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.783 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.783 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.783 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.783 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.783 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.783 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.784 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.784 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.784 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.784 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.784 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.784 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.785 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.785 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.785 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.785 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.785 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.785 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.785 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.785 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.786 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.786 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.786 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.786 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.786 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.786 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.786 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.787 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.787 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.787 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.787 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.787 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.787 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.788 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.788 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.788 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.788 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.788 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.788 2 DEBUG oslo_service.service [None req-2feeeab3-0aea-4f74-88af-f5b0d69490ff - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.789 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.814 2 INFO nova.virt.node [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Determined node identity c9b2c4a3-cb19-4387-8719-36027e3cdaec from /var/lib/nova/compute_id
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.815 2 DEBUG nova.virt.libvirt.host [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.815 2 DEBUG nova.virt.libvirt.host [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.815 2 DEBUG nova.virt.libvirt.host [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.816 2 DEBUG nova.virt.libvirt.host [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.829 2 DEBUG nova.virt.libvirt.host [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f57c1491b20> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.831 2 DEBUG nova.virt.libvirt.host [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f57c1491b20> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.832 2 INFO nova.virt.libvirt.driver [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Connection event '1' reason 'None'
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.838 2 INFO nova.virt.libvirt.host [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Libvirt host capabilities <capabilities>
Oct 10 10:05:45 compute-1 nova_compute[235132]: 
Oct 10 10:05:45 compute-1 nova_compute[235132]:   <host>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <uuid>b3ce5971-8a21-4607-a1ce-4c5a00fcffdd</uuid>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <cpu>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <arch>x86_64</arch>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model>EPYC-Rome-v4</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <vendor>AMD</vendor>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <microcode version='16777317'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <signature family='23' model='49' stepping='0'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <maxphysaddr mode='emulate' bits='40'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature name='x2apic'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature name='tsc-deadline'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature name='osxsave'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature name='hypervisor'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature name='tsc_adjust'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature name='spec-ctrl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature name='stibp'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature name='arch-capabilities'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature name='ssbd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature name='cmp_legacy'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature name='topoext'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature name='virt-ssbd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature name='lbrv'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature name='tsc-scale'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature name='vmcb-clean'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature name='pause-filter'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature name='pfthreshold'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature name='svme-addr-chk'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature name='rdctl-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature name='skip-l1dfl-vmentry'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature name='mds-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature name='pschange-mc-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <pages unit='KiB' size='4'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <pages unit='KiB' size='2048'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <pages unit='KiB' size='1048576'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     </cpu>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <power_management>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <suspend_mem/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     </power_management>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <iommu support='no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <migration_features>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <live/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <uri_transports>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <uri_transport>tcp</uri_transport>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <uri_transport>rdma</uri_transport>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </uri_transports>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     </migration_features>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <topology>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <cells num='1'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <cell id='0'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:           <memory unit='KiB'>7864356</memory>
Oct 10 10:05:45 compute-1 nova_compute[235132]:           <pages unit='KiB' size='4'>1966089</pages>
Oct 10 10:05:45 compute-1 nova_compute[235132]:           <pages unit='KiB' size='2048'>0</pages>
Oct 10 10:05:45 compute-1 nova_compute[235132]:           <pages unit='KiB' size='1048576'>0</pages>
Oct 10 10:05:45 compute-1 nova_compute[235132]:           <distances>
Oct 10 10:05:45 compute-1 nova_compute[235132]:             <sibling id='0' value='10'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:           </distances>
Oct 10 10:05:45 compute-1 nova_compute[235132]:           <cpus num='8'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:           </cpus>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         </cell>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </cells>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     </topology>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <cache>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     </cache>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <secmodel>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model>selinux</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <doi>0</doi>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     </secmodel>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <secmodel>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model>dac</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <doi>0</doi>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <baselabel type='kvm'>+107:+107</baselabel>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <baselabel type='qemu'>+107:+107</baselabel>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     </secmodel>
Oct 10 10:05:45 compute-1 nova_compute[235132]:   </host>
Oct 10 10:05:45 compute-1 nova_compute[235132]: 
Oct 10 10:05:45 compute-1 nova_compute[235132]:   <guest>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <os_type>hvm</os_type>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <arch name='i686'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <wordsize>32</wordsize>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <domain type='qemu'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <domain type='kvm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     </arch>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <features>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <pae/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <nonpae/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <acpi default='on' toggle='yes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <apic default='on' toggle='no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <cpuselection/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <deviceboot/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <disksnapshot default='on' toggle='no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <externalSnapshot/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     </features>
Oct 10 10:05:45 compute-1 nova_compute[235132]:   </guest>
Oct 10 10:05:45 compute-1 nova_compute[235132]: 
Oct 10 10:05:45 compute-1 nova_compute[235132]:   <guest>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <os_type>hvm</os_type>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <arch name='x86_64'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <wordsize>64</wordsize>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <domain type='qemu'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <domain type='kvm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     </arch>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <features>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <acpi default='on' toggle='yes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <apic default='on' toggle='no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <cpuselection/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <deviceboot/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <disksnapshot default='on' toggle='no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <externalSnapshot/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     </features>
Oct 10 10:05:45 compute-1 nova_compute[235132]:   </guest>
Oct 10 10:05:45 compute-1 nova_compute[235132]: 
Oct 10 10:05:45 compute-1 nova_compute[235132]: </capabilities>
Oct 10 10:05:45 compute-1 nova_compute[235132]: 
Oct 10 10:05:45 compute-1 sudo[235436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.847 2 DEBUG nova.virt.libvirt.host [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 10 10:05:45 compute-1 sudo[235436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.853 2 DEBUG nova.virt.libvirt.host [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct 10 10:05:45 compute-1 nova_compute[235132]: <domainCapabilities>
Oct 10 10:05:45 compute-1 nova_compute[235132]:   <path>/usr/libexec/qemu-kvm</path>
Oct 10 10:05:45 compute-1 nova_compute[235132]:   <domain>kvm</domain>
Oct 10 10:05:45 compute-1 nova_compute[235132]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 10 10:05:45 compute-1 nova_compute[235132]:   <arch>i686</arch>
Oct 10 10:05:45 compute-1 nova_compute[235132]:   <vcpu max='240'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:   <iothreads supported='yes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:   <os supported='yes'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <enum name='firmware'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <loader supported='yes'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <enum name='type'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>rom</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>pflash</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <enum name='readonly'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>yes</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>no</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <enum name='secure'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>no</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     </loader>
Oct 10 10:05:45 compute-1 nova_compute[235132]:   </os>
Oct 10 10:05:45 compute-1 nova_compute[235132]:   <cpu>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <mode name='host-passthrough' supported='yes'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <enum name='hostPassthroughMigratable'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>on</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>off</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     </mode>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <mode name='maximum' supported='yes'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <enum name='maximumMigratable'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>on</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>off</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     </mode>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <mode name='host-model' supported='yes'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <vendor>AMD</vendor>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='x2apic'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='tsc-deadline'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='hypervisor'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='tsc_adjust'/>
Oct 10 10:05:45 compute-1 sudo[235436]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='spec-ctrl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='stibp'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='arch-capabilities'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='ssbd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='cmp_legacy'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='overflow-recov'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='succor'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='ibrs'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='amd-ssbd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='virt-ssbd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='lbrv'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='tsc-scale'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='vmcb-clean'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='flushbyasid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='pause-filter'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='pfthreshold'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='svme-addr-chk'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='rdctl-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='mds-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='pschange-mc-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='gds-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='rfds-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='disable' name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     </mode>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <mode name='custom' supported='yes'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Broadwell'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Broadwell-IBRS'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Broadwell-noTSX'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Broadwell-v1'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Broadwell-v2'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Broadwell-v3'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Broadwell-v4'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Cascadelake-Server'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Cascadelake-Server-v1'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Cascadelake-Server-v2'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Cascadelake-Server-v3'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Cascadelake-Server-v4'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Cascadelake-Server-v5'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Cooperlake'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-bf16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Cooperlake-v1'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-bf16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Cooperlake-v2'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-bf16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Denverton'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='mpx'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Denverton-v1'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='mpx'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Denverton-v2'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Denverton-v3'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Dhyana-v2'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='EPYC-Genoa'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amd-psfd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='auto-ibrs'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-bf16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='no-nested-data-bp'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='null-sel-clr-base'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='stibp-always-on'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='EPYC-Genoa-v1'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amd-psfd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='auto-ibrs'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-bf16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='no-nested-data-bp'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='null-sel-clr-base'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='stibp-always-on'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='EPYC-Milan'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='EPYC-Milan-v1'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='EPYC-Milan-v2'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amd-psfd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='no-nested-data-bp'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='null-sel-clr-base'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='stibp-always-on'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='EPYC-Rome'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='EPYC-Rome-v1'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='EPYC-Rome-v2'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='EPYC-Rome-v3'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='EPYC-v3'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='EPYC-v4'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='GraniteRapids'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amx-bf16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amx-fp16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amx-int8'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amx-tile'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx-vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-bf16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-fp16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fbsdp-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrc'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrs'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fzrm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='mcdt-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pbrsb-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='prefetchiti'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='psdp-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='serialize'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xfd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='GraniteRapids-v1'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amx-bf16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amx-fp16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amx-int8'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amx-tile'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx-vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-bf16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-fp16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fbsdp-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrc'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrs'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fzrm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='mcdt-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pbrsb-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='prefetchiti'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='psdp-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='serialize'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xfd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='GraniteRapids-v2'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amx-bf16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amx-fp16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amx-int8'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amx-tile'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx-vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx10'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx10-128'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx10-256'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx10-512'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-bf16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-fp16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='cldemote'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fbsdp-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrc'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrs'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fzrm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='mcdt-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='movdir64b'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='movdiri'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pbrsb-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='prefetchiti'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='psdp-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='serialize'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ss'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xfd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Haswell'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Haswell-IBRS'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Haswell-noTSX'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Haswell-v1'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Haswell-v2'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Haswell-v3'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Haswell-v4'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Icelake-Server'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Icelake-Server-noTSX'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Icelake-Server-v1'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Icelake-Server-v2'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Icelake-Server-v3'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Icelake-Server-v4'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Icelake-Server-v5'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Icelake-Server-v6'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Icelake-Server-v7'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='IvyBridge'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='IvyBridge-IBRS'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='IvyBridge-v1'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='IvyBridge-v2'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='KnightsMill'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-4fmaps'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-4vnniw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512er'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512pf'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ss'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='KnightsMill-v1'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-4fmaps'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-4vnniw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512er'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512pf'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ss'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Opteron_G4'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fma4'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xop'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Opteron_G4-v1'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fma4'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xop'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Opteron_G5'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fma4'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='tbm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xop'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Opteron_G5-v1'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fma4'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='tbm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xop'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='SapphireRapids'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amx-bf16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amx-int8'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amx-tile'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx-vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-bf16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-fp16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrc'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrs'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fzrm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='serialize'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xfd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='SapphireRapids-v1'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amx-bf16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amx-int8'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amx-tile'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx-vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-bf16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-fp16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrc'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrs'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fzrm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='serialize'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xfd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='SapphireRapids-v2'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amx-bf16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amx-int8'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amx-tile'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx-vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-bf16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-fp16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fbsdp-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrc'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrs'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fzrm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='psdp-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='serialize'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xfd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='SapphireRapids-v3'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amx-bf16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amx-int8'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amx-tile'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx-vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-bf16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-fp16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='cldemote'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fbsdp-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrc'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrs'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fzrm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='movdir64b'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='movdiri'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='psdp-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='serialize'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ss'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xfd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='SierraForest'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx-ifma'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx-ne-convert'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx-vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx-vnni-int8'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='cmpccxadd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fbsdp-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrs'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='mcdt-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pbrsb-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='psdp-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='serialize'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='SierraForest-v1'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx-ifma'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx-ne-convert'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx-vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx-vnni-int8'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='cmpccxadd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fbsdp-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrs'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='mcdt-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pbrsb-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='psdp-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='serialize'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Skylake-Client'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Skylake-Client-IBRS'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Skylake-Client-v1'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Skylake-Client-v2'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Skylake-Client-v3'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Skylake-Client-v4'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Skylake-Server'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Skylake-Server-IBRS'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Skylake-Server-v1'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Skylake-Server-v2'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Skylake-Server-v3'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Skylake-Server-v4'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Skylake-Server-v5'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Snowridge'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='cldemote'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='core-capability'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='movdir64b'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='movdiri'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='mpx'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='split-lock-detect'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Snowridge-v1'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='cldemote'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='core-capability'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='movdir64b'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='movdiri'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='mpx'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='split-lock-detect'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Snowridge-v2'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='cldemote'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='core-capability'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='movdir64b'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='movdiri'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='split-lock-detect'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Snowridge-v3'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='cldemote'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='core-capability'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='movdir64b'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='movdiri'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='split-lock-detect'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Snowridge-v4'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='cldemote'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='movdir64b'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='movdiri'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='athlon'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='3dnow'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='3dnowext'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='athlon-v1'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='3dnow'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='3dnowext'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='core2duo'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ss'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='core2duo-v1'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ss'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='coreduo'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ss'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='coreduo-v1'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ss'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='n270'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ss'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='n270-v1'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ss'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='phenom'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='3dnow'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='3dnowext'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='phenom-v1'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='3dnow'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='3dnowext'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     </mode>
Oct 10 10:05:45 compute-1 nova_compute[235132]:   </cpu>
Oct 10 10:05:45 compute-1 nova_compute[235132]:   <memoryBacking supported='yes'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <enum name='sourceType'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <value>file</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <value>anonymous</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <value>memfd</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     </enum>
Oct 10 10:05:45 compute-1 nova_compute[235132]:   </memoryBacking>
Oct 10 10:05:45 compute-1 nova_compute[235132]:   <devices>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <disk supported='yes'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <enum name='diskDevice'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>disk</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>cdrom</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>floppy</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>lun</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <enum name='bus'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>ide</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>fdc</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>scsi</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>virtio</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>usb</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>sata</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <enum name='model'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>virtio</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>virtio-transitional</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>virtio-non-transitional</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     </disk>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <graphics supported='yes'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <enum name='type'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>vnc</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>egl-headless</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>dbus</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     </graphics>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <video supported='yes'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <enum name='modelType'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>vga</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>cirrus</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>virtio</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>none</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>bochs</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>ramfb</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     </video>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <hostdev supported='yes'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <enum name='mode'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>subsystem</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <enum name='startupPolicy'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>default</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>mandatory</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>requisite</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>optional</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <enum name='subsysType'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>usb</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>pci</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>scsi</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <enum name='capsType'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <enum name='pciBackend'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     </hostdev>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <rng supported='yes'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <enum name='model'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>virtio</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>virtio-transitional</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>virtio-non-transitional</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <enum name='backendModel'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>random</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>egd</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>builtin</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     </rng>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <filesystem supported='yes'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <enum name='driverType'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>path</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>handle</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>virtiofs</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     </filesystem>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <tpm supported='yes'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <enum name='model'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>tpm-tis</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>tpm-crb</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <enum name='backendModel'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>emulator</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>external</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <enum name='backendVersion'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>2.0</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     </tpm>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <redirdev supported='yes'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <enum name='bus'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>usb</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     </redirdev>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <channel supported='yes'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <enum name='type'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>pty</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>unix</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     </channel>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <crypto supported='yes'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <enum name='model'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <enum name='type'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>qemu</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <enum name='backendModel'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>builtin</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     </crypto>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <interface supported='yes'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <enum name='backendType'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>default</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>passt</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     </interface>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <panic supported='yes'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <enum name='model'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>isa</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>hyperv</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     </panic>
Oct 10 10:05:45 compute-1 nova_compute[235132]:   </devices>
Oct 10 10:05:45 compute-1 nova_compute[235132]:   <features>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <gic supported='no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <vmcoreinfo supported='yes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <genid supported='yes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <backingStoreInput supported='yes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <backup supported='yes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <async-teardown supported='yes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <ps2 supported='yes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <sev supported='no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <sgx supported='no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <hyperv supported='yes'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <enum name='features'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>relaxed</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>vapic</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>spinlocks</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>vpindex</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>runtime</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>synic</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>stimer</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>reset</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>vendor_id</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>frequencies</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>reenlightenment</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>tlbflush</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>ipi</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>avic</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>emsr_bitmap</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>xmm_input</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     </hyperv>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <launchSecurity supported='no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:   </features>
Oct 10 10:05:45 compute-1 nova_compute[235132]: </domainCapabilities>
Oct 10 10:05:45 compute-1 nova_compute[235132]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 10 10:05:45 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.861 2 DEBUG nova.virt.libvirt.host [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct 10 10:05:45 compute-1 nova_compute[235132]: <domainCapabilities>
Oct 10 10:05:45 compute-1 nova_compute[235132]:   <path>/usr/libexec/qemu-kvm</path>
Oct 10 10:05:45 compute-1 nova_compute[235132]:   <domain>kvm</domain>
Oct 10 10:05:45 compute-1 nova_compute[235132]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 10 10:05:45 compute-1 nova_compute[235132]:   <arch>i686</arch>
Oct 10 10:05:45 compute-1 nova_compute[235132]:   <vcpu max='4096'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:   <iothreads supported='yes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:   <os supported='yes'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <enum name='firmware'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <loader supported='yes'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <enum name='type'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>rom</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>pflash</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <enum name='readonly'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>yes</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>no</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <enum name='secure'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>no</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     </loader>
Oct 10 10:05:45 compute-1 nova_compute[235132]:   </os>
Oct 10 10:05:45 compute-1 nova_compute[235132]:   <cpu>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <mode name='host-passthrough' supported='yes'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <enum name='hostPassthroughMigratable'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>on</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>off</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     </mode>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <mode name='maximum' supported='yes'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <enum name='maximumMigratable'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>on</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <value>off</value>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     </mode>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <mode name='host-model' supported='yes'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <vendor>AMD</vendor>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='x2apic'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='tsc-deadline'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='hypervisor'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='tsc_adjust'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='spec-ctrl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='stibp'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='arch-capabilities'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='ssbd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='cmp_legacy'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='overflow-recov'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='succor'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='ibrs'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='amd-ssbd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='virt-ssbd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='lbrv'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='tsc-scale'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='vmcb-clean'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='flushbyasid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='pause-filter'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='pfthreshold'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='svme-addr-chk'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='rdctl-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='mds-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='pschange-mc-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='gds-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='require' name='rfds-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <feature policy='disable' name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     </mode>
Oct 10 10:05:45 compute-1 nova_compute[235132]:     <mode name='custom' supported='yes'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Broadwell'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Broadwell-IBRS'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Broadwell-noTSX'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Broadwell-v1'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Broadwell-v2'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Broadwell-v3'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Broadwell-v4'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Cascadelake-Server'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Cascadelake-Server-v1'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Cascadelake-Server-v2'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Cascadelake-Server-v3'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Cascadelake-Server-v4'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Cascadelake-Server-v5'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Cooperlake'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-bf16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Cooperlake-v1'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-bf16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Cooperlake-v2'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-bf16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Denverton'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='mpx'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Denverton-v1'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='mpx'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Denverton-v2'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Denverton-v3'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Dhyana-v2'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='EPYC-Genoa'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amd-psfd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='auto-ibrs'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-bf16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='no-nested-data-bp'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='null-sel-clr-base'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='stibp-always-on'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='EPYC-Genoa-v1'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amd-psfd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='auto-ibrs'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-bf16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='no-nested-data-bp'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='null-sel-clr-base'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='stibp-always-on'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='EPYC-Milan'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='EPYC-Milan-v1'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='EPYC-Milan-v2'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amd-psfd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='no-nested-data-bp'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='null-sel-clr-base'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='stibp-always-on'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='EPYC-Rome'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='EPYC-Rome-v1'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='EPYC-Rome-v2'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='EPYC-Rome-v3'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='EPYC-v3'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='EPYC-v4'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='GraniteRapids'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amx-bf16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amx-fp16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amx-int8'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amx-tile'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx-vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-bf16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-fp16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fbsdp-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrc'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrs'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fzrm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='mcdt-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pbrsb-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='prefetchiti'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='psdp-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='serialize'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xfd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='GraniteRapids-v1'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amx-bf16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amx-fp16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amx-int8'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amx-tile'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx-vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-bf16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-fp16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fbsdp-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrc'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrs'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fzrm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='mcdt-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pbrsb-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='prefetchiti'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='psdp-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='serialize'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xfd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='GraniteRapids-v2'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amx-bf16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amx-fp16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amx-int8'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amx-tile'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx-vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx10'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx10-128'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx10-256'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx10-512'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-bf16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-fp16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='cldemote'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fbsdp-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrc'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrs'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fzrm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='mcdt-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='movdir64b'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='movdiri'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pbrsb-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='prefetchiti'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='psdp-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='serialize'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ss'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xfd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Haswell'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Haswell-IBRS'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Haswell-noTSX'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Haswell-v1'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Haswell-v2'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Haswell-v3'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Haswell-v4'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Icelake-Server'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Icelake-Server-noTSX'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Icelake-Server-v1'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Icelake-Server-v2'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Icelake-Server-v3'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Icelake-Server-v4'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Icelake-Server-v5'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Icelake-Server-v6'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Icelake-Server-v7'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='IvyBridge'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='IvyBridge-IBRS'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='IvyBridge-v1'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='IvyBridge-v2'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='KnightsMill'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-4fmaps'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-4vnniw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512er'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512pf'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ss'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='KnightsMill-v1'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-4fmaps'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-4vnniw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512er'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512pf'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ss'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Opteron_G4'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fma4'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xop'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Opteron_G4-v1'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fma4'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xop'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Opteron_G5'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fma4'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='tbm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xop'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='Opteron_G5-v1'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fma4'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='tbm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xop'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='SapphireRapids'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amx-bf16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amx-int8'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amx-tile'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx-vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-bf16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-fp16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrc'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrs'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fzrm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='serialize'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xfd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='SapphireRapids-v1'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amx-bf16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amx-int8'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amx-tile'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx-vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-bf16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-fp16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrc'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrs'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fzrm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='serialize'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xfd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='SapphireRapids-v2'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amx-bf16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amx-int8'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amx-tile'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx-vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-bf16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-fp16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fbsdp-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrc'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrs'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fzrm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='psdp-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='serialize'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xfd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 10 10:05:45 compute-1 nova_compute[235132]:       <blockers model='SapphireRapids-v3'>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amx-bf16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amx-int8'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='amx-tile'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx-vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-bf16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-fp16'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='cldemote'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fbsdp-no'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrc'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fsrs'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='fzrm'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='movdir64b'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='movdiri'/>
Oct 10 10:05:45 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='psdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ss'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xfd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='SierraForest'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx-ifma'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx-ne-convert'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx-vnni-int8'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='cmpccxadd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fbsdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='mcdt-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pbrsb-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='psdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='SierraForest-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx-ifma'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx-ne-convert'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx-vnni-int8'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='cmpccxadd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fbsdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='mcdt-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pbrsb-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='psdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Skylake-Client'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Skylake-Client-IBRS'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Skylake-Client-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Skylake-Client-v2'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Skylake-Client-v3'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Skylake-Client-v4'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Skylake-Server'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Skylake-Server-IBRS'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Skylake-Server-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Skylake-Server-v2'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Skylake-Server-v3'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Skylake-Server-v4'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Skylake-Server-v5'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Snowridge'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='cldemote'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='core-capability'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='movdir64b'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='movdiri'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='mpx'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='split-lock-detect'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Snowridge-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='cldemote'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='core-capability'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='movdir64b'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='movdiri'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='mpx'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='split-lock-detect'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Snowridge-v2'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='cldemote'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='core-capability'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='movdir64b'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='movdiri'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='split-lock-detect'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Snowridge-v3'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='cldemote'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='core-capability'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='movdir64b'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='movdiri'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='split-lock-detect'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Snowridge-v4'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='cldemote'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='movdir64b'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='movdiri'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='athlon'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='3dnow'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='3dnowext'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='athlon-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='3dnow'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='3dnowext'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='core2duo'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ss'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='core2duo-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ss'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='coreduo'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ss'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='coreduo-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ss'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='n270'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ss'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='n270-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ss'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='phenom'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='3dnow'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='3dnowext'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='phenom-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='3dnow'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='3dnowext'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </mode>
Oct 10 10:05:46 compute-1 nova_compute[235132]:   </cpu>
Oct 10 10:05:46 compute-1 nova_compute[235132]:   <memoryBacking supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <enum name='sourceType'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <value>file</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <value>anonymous</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <value>memfd</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:   </memoryBacking>
Oct 10 10:05:46 compute-1 nova_compute[235132]:   <devices>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <disk supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='diskDevice'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>disk</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>cdrom</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>floppy</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>lun</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='bus'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>fdc</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>scsi</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>virtio</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>usb</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>sata</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='model'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>virtio</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>virtio-transitional</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>virtio-non-transitional</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </disk>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <graphics supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='type'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>vnc</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>egl-headless</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>dbus</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </graphics>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <video supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='modelType'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>vga</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>cirrus</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>virtio</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>none</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>bochs</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>ramfb</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </video>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <hostdev supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='mode'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>subsystem</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='startupPolicy'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>default</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>mandatory</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>requisite</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>optional</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='subsysType'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>usb</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>pci</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>scsi</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='capsType'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='pciBackend'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </hostdev>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <rng supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='model'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>virtio</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>virtio-transitional</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>virtio-non-transitional</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='backendModel'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>random</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>egd</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>builtin</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </rng>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <filesystem supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='driverType'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>path</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>handle</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>virtiofs</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </filesystem>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <tpm supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='model'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>tpm-tis</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>tpm-crb</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='backendModel'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>emulator</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>external</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='backendVersion'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>2.0</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </tpm>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <redirdev supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='bus'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>usb</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </redirdev>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <channel supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='type'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>pty</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>unix</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </channel>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <crypto supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='model'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='type'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>qemu</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='backendModel'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>builtin</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </crypto>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <interface supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='backendType'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>default</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>passt</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </interface>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <panic supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='model'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>isa</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>hyperv</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </panic>
Oct 10 10:05:46 compute-1 nova_compute[235132]:   </devices>
Oct 10 10:05:46 compute-1 nova_compute[235132]:   <features>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <gic supported='no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <vmcoreinfo supported='yes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <genid supported='yes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <backingStoreInput supported='yes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <backup supported='yes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <async-teardown supported='yes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <ps2 supported='yes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <sev supported='no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <sgx supported='no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <hyperv supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='features'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>relaxed</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>vapic</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>spinlocks</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>vpindex</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>runtime</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>synic</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>stimer</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>reset</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>vendor_id</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>frequencies</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>reenlightenment</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>tlbflush</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>ipi</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>avic</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>emsr_bitmap</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>xmm_input</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </hyperv>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <launchSecurity supported='no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:   </features>
Oct 10 10:05:46 compute-1 nova_compute[235132]: </domainCapabilities>
Oct 10 10:05:46 compute-1 nova_compute[235132]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 10 10:05:46 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.912 2 DEBUG nova.virt.libvirt.host [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 10 10:05:46 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.913 2 DEBUG nova.virt.libvirt.volume.mount [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Oct 10 10:05:46 compute-1 nova_compute[235132]: 2025-10-10 10:05:45.919 2 DEBUG nova.virt.libvirt.host [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct 10 10:05:46 compute-1 nova_compute[235132]: <domainCapabilities>
Oct 10 10:05:46 compute-1 nova_compute[235132]:   <path>/usr/libexec/qemu-kvm</path>
Oct 10 10:05:46 compute-1 nova_compute[235132]:   <domain>kvm</domain>
Oct 10 10:05:46 compute-1 nova_compute[235132]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 10 10:05:46 compute-1 nova_compute[235132]:   <arch>x86_64</arch>
Oct 10 10:05:46 compute-1 nova_compute[235132]:   <vcpu max='240'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:   <iothreads supported='yes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:   <os supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <enum name='firmware'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <loader supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='type'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>rom</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>pflash</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='readonly'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>yes</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>no</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='secure'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>no</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </loader>
Oct 10 10:05:46 compute-1 nova_compute[235132]:   </os>
Oct 10 10:05:46 compute-1 nova_compute[235132]:   <cpu>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <mode name='host-passthrough' supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='hostPassthroughMigratable'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>on</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>off</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </mode>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <mode name='maximum' supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='maximumMigratable'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>on</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>off</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </mode>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <mode name='host-model' supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <vendor>AMD</vendor>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='x2apic'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='tsc-deadline'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='hypervisor'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='tsc_adjust'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='spec-ctrl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='stibp'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='arch-capabilities'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='ssbd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='cmp_legacy'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='overflow-recov'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='succor'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='ibrs'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='amd-ssbd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='virt-ssbd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='lbrv'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='tsc-scale'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='vmcb-clean'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='flushbyasid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='pause-filter'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='pfthreshold'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='svme-addr-chk'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='rdctl-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='mds-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='pschange-mc-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='gds-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='rfds-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='disable' name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </mode>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <mode name='custom' supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Broadwell'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Broadwell-IBRS'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Broadwell-noTSX'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Broadwell-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Broadwell-v2'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Broadwell-v3'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Broadwell-v4'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Cascadelake-Server'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Cascadelake-Server-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Cascadelake-Server-v2'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Cascadelake-Server-v3'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Cascadelake-Server-v4'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Cascadelake-Server-v5'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Cooperlake'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Cooperlake-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Cooperlake-v2'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Denverton'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='mpx'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Denverton-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='mpx'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Denverton-v2'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Denverton-v3'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Dhyana-v2'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='EPYC-Genoa'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amd-psfd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='auto-ibrs'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='no-nested-data-bp'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='null-sel-clr-base'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='stibp-always-on'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='EPYC-Genoa-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amd-psfd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='auto-ibrs'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='no-nested-data-bp'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='null-sel-clr-base'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='stibp-always-on'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='EPYC-Milan'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='EPYC-Milan-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='EPYC-Milan-v2'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amd-psfd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='no-nested-data-bp'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='null-sel-clr-base'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='stibp-always-on'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='EPYC-Rome'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='EPYC-Rome-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='EPYC-Rome-v2'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='EPYC-Rome-v3'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='EPYC-v3'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='EPYC-v4'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='GraniteRapids'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amx-bf16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amx-fp16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amx-int8'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amx-tile'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-fp16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fbsdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrc'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fzrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='mcdt-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pbrsb-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='prefetchiti'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='psdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xfd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='GraniteRapids-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amx-bf16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amx-fp16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amx-int8'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amx-tile'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-fp16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fbsdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrc'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fzrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='mcdt-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pbrsb-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='prefetchiti'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='psdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xfd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='GraniteRapids-v2'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amx-bf16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amx-fp16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amx-int8'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amx-tile'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx10'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx10-128'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx10-256'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx10-512'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-fp16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='cldemote'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fbsdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrc'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fzrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='mcdt-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='movdir64b'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='movdiri'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pbrsb-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='prefetchiti'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='psdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ss'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xfd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Haswell'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Haswell-IBRS'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Haswell-noTSX'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Haswell-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Haswell-v2'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Haswell-v3'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Haswell-v4'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Icelake-Server'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Icelake-Server-noTSX'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Icelake-Server-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Icelake-Server-v2'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Icelake-Server-v3'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Icelake-Server-v4'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Icelake-Server-v5'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Icelake-Server-v6'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Icelake-Server-v7'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:46 compute-1 ceph-mon[79167]: pgmap v586: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='IvyBridge'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='IvyBridge-IBRS'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='IvyBridge-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='IvyBridge-v2'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='KnightsMill'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-4fmaps'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-4vnniw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512er'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512pf'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ss'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='KnightsMill-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-4fmaps'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-4vnniw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512er'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512pf'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ss'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Opteron_G4'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fma4'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xop'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Opteron_G4-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fma4'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xop'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Opteron_G5'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fma4'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='tbm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xop'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Opteron_G5-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fma4'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='tbm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xop'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='SapphireRapids'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amx-bf16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amx-int8'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amx-tile'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-fp16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrc'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fzrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xfd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='SapphireRapids-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amx-bf16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amx-int8'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amx-tile'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-fp16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrc'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fzrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xfd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='SapphireRapids-v2'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amx-bf16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amx-int8'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amx-tile'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-fp16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fbsdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrc'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fzrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='psdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xfd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='SapphireRapids-v3'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amx-bf16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amx-int8'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amx-tile'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-fp16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='cldemote'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fbsdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrc'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fzrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='movdir64b'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='movdiri'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='psdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ss'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xfd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='SierraForest'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx-ifma'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx-ne-convert'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx-vnni-int8'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='cmpccxadd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fbsdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='mcdt-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pbrsb-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='psdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='SierraForest-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx-ifma'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx-ne-convert'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx-vnni-int8'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='cmpccxadd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fbsdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='mcdt-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pbrsb-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='psdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Skylake-Client'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Skylake-Client-IBRS'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Skylake-Client-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Skylake-Client-v2'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Skylake-Client-v3'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Skylake-Client-v4'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Skylake-Server'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Skylake-Server-IBRS'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Skylake-Server-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Skylake-Server-v2'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Skylake-Server-v3'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Skylake-Server-v4'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Skylake-Server-v5'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Snowridge'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='cldemote'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='core-capability'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='movdir64b'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='movdiri'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='mpx'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='split-lock-detect'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Snowridge-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='cldemote'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='core-capability'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='movdir64b'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='movdiri'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='mpx'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='split-lock-detect'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Snowridge-v2'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='cldemote'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='core-capability'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='movdir64b'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='movdiri'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='split-lock-detect'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Snowridge-v3'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='cldemote'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='core-capability'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='movdir64b'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='movdiri'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='split-lock-detect'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Snowridge-v4'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='cldemote'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='movdir64b'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='movdiri'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='athlon'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='3dnow'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='3dnowext'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='athlon-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='3dnow'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='3dnowext'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='core2duo'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ss'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='core2duo-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ss'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='coreduo'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ss'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='coreduo-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ss'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='n270'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ss'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='n270-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ss'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='phenom'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='3dnow'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='3dnowext'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='phenom-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='3dnow'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='3dnowext'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </mode>
Oct 10 10:05:46 compute-1 nova_compute[235132]:   </cpu>
Oct 10 10:05:46 compute-1 nova_compute[235132]:   <memoryBacking supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <enum name='sourceType'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <value>file</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <value>anonymous</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <value>memfd</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:   </memoryBacking>
Oct 10 10:05:46 compute-1 nova_compute[235132]:   <devices>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <disk supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='diskDevice'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>disk</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>cdrom</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>floppy</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>lun</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='bus'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>ide</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>fdc</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>scsi</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>virtio</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>usb</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>sata</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='model'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>virtio</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>virtio-transitional</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>virtio-non-transitional</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </disk>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <graphics supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='type'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>vnc</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>egl-headless</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>dbus</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </graphics>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <video supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='modelType'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>vga</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>cirrus</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>virtio</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>none</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>bochs</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>ramfb</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </video>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <hostdev supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='mode'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>subsystem</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='startupPolicy'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>default</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>mandatory</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>requisite</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>optional</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='subsysType'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>usb</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>pci</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>scsi</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='capsType'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='pciBackend'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </hostdev>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <rng supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='model'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>virtio</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>virtio-transitional</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>virtio-non-transitional</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='backendModel'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>random</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>egd</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>builtin</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </rng>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <filesystem supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='driverType'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>path</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>handle</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>virtiofs</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </filesystem>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <tpm supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='model'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>tpm-tis</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>tpm-crb</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='backendModel'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>emulator</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>external</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='backendVersion'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>2.0</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </tpm>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <redirdev supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='bus'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>usb</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </redirdev>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <channel supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='type'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>pty</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>unix</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </channel>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <crypto supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='model'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='type'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>qemu</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='backendModel'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>builtin</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </crypto>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <interface supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='backendType'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>default</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>passt</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </interface>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <panic supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='model'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>isa</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>hyperv</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </panic>
Oct 10 10:05:46 compute-1 nova_compute[235132]:   </devices>
Oct 10 10:05:46 compute-1 nova_compute[235132]:   <features>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <gic supported='no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <vmcoreinfo supported='yes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <genid supported='yes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <backingStoreInput supported='yes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <backup supported='yes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <async-teardown supported='yes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <ps2 supported='yes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <sev supported='no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <sgx supported='no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <hyperv supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='features'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>relaxed</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>vapic</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>spinlocks</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>vpindex</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>runtime</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>synic</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>stimer</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>reset</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>vendor_id</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>frequencies</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>reenlightenment</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>tlbflush</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>ipi</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>avic</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>emsr_bitmap</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>xmm_input</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </hyperv>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <launchSecurity supported='no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:   </features>
Oct 10 10:05:46 compute-1 nova_compute[235132]: </domainCapabilities>
Oct 10 10:05:46 compute-1 nova_compute[235132]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 10 10:05:46 compute-1 nova_compute[235132]: 2025-10-10 10:05:46.014 2 DEBUG nova.virt.libvirt.host [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct 10 10:05:46 compute-1 nova_compute[235132]: <domainCapabilities>
Oct 10 10:05:46 compute-1 nova_compute[235132]:   <path>/usr/libexec/qemu-kvm</path>
Oct 10 10:05:46 compute-1 nova_compute[235132]:   <domain>kvm</domain>
Oct 10 10:05:46 compute-1 nova_compute[235132]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 10 10:05:46 compute-1 nova_compute[235132]:   <arch>x86_64</arch>
Oct 10 10:05:46 compute-1 nova_compute[235132]:   <vcpu max='4096'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:   <iothreads supported='yes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:   <os supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <enum name='firmware'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <value>efi</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <loader supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='type'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>rom</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>pflash</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='readonly'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>yes</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>no</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='secure'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>yes</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>no</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </loader>
Oct 10 10:05:46 compute-1 nova_compute[235132]:   </os>
Oct 10 10:05:46 compute-1 nova_compute[235132]:   <cpu>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <mode name='host-passthrough' supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='hostPassthroughMigratable'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>on</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>off</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </mode>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <mode name='maximum' supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='maximumMigratable'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>on</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>off</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </mode>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <mode name='host-model' supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model fallback='forbid'>EPYC-Rome</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <vendor>AMD</vendor>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <maxphysaddr mode='passthrough' limit='40'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='x2apic'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='tsc-deadline'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='hypervisor'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='tsc_adjust'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='spec-ctrl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='stibp'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='arch-capabilities'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='ssbd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='cmp_legacy'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='overflow-recov'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='succor'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='ibrs'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='amd-ssbd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='virt-ssbd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='lbrv'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='tsc-scale'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='vmcb-clean'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='flushbyasid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='pause-filter'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='pfthreshold'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='svme-addr-chk'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='rdctl-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='mds-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='pschange-mc-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='gds-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='require' name='rfds-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <feature policy='disable' name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </mode>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <mode name='custom' supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Broadwell'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Broadwell-IBRS'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Broadwell-noTSX'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Broadwell-noTSX-IBRS'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Broadwell-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Broadwell-v2'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Broadwell-v3'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Broadwell-v4'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Cascadelake-Server'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Cascadelake-Server-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Cascadelake-Server-v2'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Cascadelake-Server-v3'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Cascadelake-Server-v4'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Cascadelake-Server-v5'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Cooperlake'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Cooperlake-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Cooperlake-v2'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Denverton'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='mpx'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Denverton-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='mpx'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Denverton-v2'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Denverton-v3'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Dhyana-v2'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='EPYC-Genoa'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amd-psfd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='auto-ibrs'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='no-nested-data-bp'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='null-sel-clr-base'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='stibp-always-on'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='EPYC-Genoa-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amd-psfd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='auto-ibrs'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='no-nested-data-bp'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='null-sel-clr-base'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='stibp-always-on'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='EPYC-Milan'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='EPYC-Milan-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='EPYC-Milan-v2'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amd-psfd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='no-nested-data-bp'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='null-sel-clr-base'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='stibp-always-on'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='EPYC-Rome'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='EPYC-Rome-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='EPYC-Rome-v2'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='EPYC-Rome-v3'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='EPYC-v3'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='EPYC-v4'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='GraniteRapids'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amx-bf16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amx-fp16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amx-int8'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amx-tile'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-fp16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fbsdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrc'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fzrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='mcdt-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pbrsb-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='prefetchiti'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='psdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xfd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='GraniteRapids-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amx-bf16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amx-fp16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amx-int8'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amx-tile'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-fp16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fbsdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrc'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fzrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='mcdt-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pbrsb-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='prefetchiti'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='psdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xfd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='GraniteRapids-v2'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amx-bf16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amx-fp16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amx-int8'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amx-tile'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx10'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx10-128'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx10-256'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx10-512'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-fp16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='cldemote'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fbsdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrc'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fzrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='mcdt-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='movdir64b'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='movdiri'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pbrsb-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='prefetchiti'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='psdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ss'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xfd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Haswell'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Haswell-IBRS'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Haswell-noTSX'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Haswell-noTSX-IBRS'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Haswell-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Haswell-v2'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Haswell-v3'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Haswell-v4'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Icelake-Server'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Icelake-Server-noTSX'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Icelake-Server-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Icelake-Server-v2'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Icelake-Server-v3'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Icelake-Server-v4'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Icelake-Server-v5'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Icelake-Server-v6'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Icelake-Server-v7'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='IvyBridge'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='IvyBridge-IBRS'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='IvyBridge-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='IvyBridge-v2'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='KnightsMill'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-4fmaps'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-4vnniw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512er'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512pf'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ss'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='KnightsMill-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-4fmaps'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-4vnniw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512er'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512pf'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ss'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Opteron_G4'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fma4'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xop'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Opteron_G4-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fma4'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xop'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Opteron_G5'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fma4'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='tbm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xop'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Opteron_G5-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fma4'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='tbm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xop'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='SapphireRapids'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amx-bf16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amx-int8'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amx-tile'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-fp16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrc'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fzrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xfd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='SapphireRapids-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amx-bf16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amx-int8'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amx-tile'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-fp16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrc'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fzrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xfd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='SapphireRapids-v2'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amx-bf16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amx-int8'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amx-tile'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-fp16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fbsdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrc'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fzrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='psdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xfd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='SapphireRapids-v3'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amx-bf16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amx-int8'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='amx-tile'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-bf16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-fp16'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512-vpopcntdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bitalg'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512ifma'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vbmi2'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='cldemote'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fbsdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrc'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fzrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='la57'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='movdir64b'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='movdiri'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='psdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ss'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='taa-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='tsx-ldtrk'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xfd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='SierraForest'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx-ifma'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx-ne-convert'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx-vnni-int8'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='cmpccxadd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fbsdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='mcdt-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pbrsb-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='psdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='SierraForest-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx-ifma'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx-ne-convert'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx-vnni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx-vnni-int8'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='bus-lock-detect'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='cmpccxadd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fbsdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='fsrs'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ibrs-all'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='mcdt-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pbrsb-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='psdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='sbdr-ssdp-no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='serialize'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vaes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='vpclmulqdq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Skylake-Client'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Skylake-Client-IBRS'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Skylake-Client-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Skylake-Client-v2'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Skylake-Client-v3'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Skylake-Client-v4'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Skylake-Server'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Skylake-Server-IBRS'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Skylake-Server-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Skylake-Server-v2'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='hle'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='rtm'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Skylake-Server-v3'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Skylake-Server-v4'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Skylake-Server-v5'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512bw'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512cd'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512dq'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512f'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='avx512vl'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='invpcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pcid'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='pku'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Snowridge'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='cldemote'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='core-capability'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='movdir64b'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='movdiri'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='mpx'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='split-lock-detect'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Snowridge-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='cldemote'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='core-capability'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='movdir64b'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='movdiri'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='mpx'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='split-lock-detect'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Snowridge-v2'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='cldemote'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='core-capability'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='movdir64b'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='movdiri'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='split-lock-detect'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Snowridge-v3'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='cldemote'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='core-capability'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='movdir64b'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='movdiri'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='split-lock-detect'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='Snowridge-v4'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='cldemote'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='erms'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='gfni'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='movdir64b'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='movdiri'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='xsaves'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='athlon'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='3dnow'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='3dnowext'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='athlon-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='3dnow'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='3dnowext'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='core2duo'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ss'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='core2duo-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ss'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='coreduo'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ss'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='coreduo-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ss'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='n270'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ss'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='n270-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='ss'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='phenom'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='3dnow'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='3dnowext'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <blockers model='phenom-v1'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='3dnow'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <feature name='3dnowext'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </blockers>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </mode>
Oct 10 10:05:46 compute-1 nova_compute[235132]:   </cpu>
Oct 10 10:05:46 compute-1 nova_compute[235132]:   <memoryBacking supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <enum name='sourceType'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <value>file</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <value>anonymous</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <value>memfd</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:   </memoryBacking>
Oct 10 10:05:46 compute-1 nova_compute[235132]:   <devices>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <disk supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='diskDevice'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>disk</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>cdrom</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>floppy</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>lun</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='bus'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>fdc</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>scsi</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>virtio</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>usb</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>sata</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='model'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>virtio</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>virtio-transitional</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>virtio-non-transitional</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </disk>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <graphics supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='type'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>vnc</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>egl-headless</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>dbus</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </graphics>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <video supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='modelType'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>vga</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>cirrus</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>virtio</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>none</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>bochs</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>ramfb</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </video>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <hostdev supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='mode'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>subsystem</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='startupPolicy'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>default</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>mandatory</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>requisite</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>optional</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='subsysType'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>usb</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>pci</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>scsi</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='capsType'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='pciBackend'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </hostdev>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <rng supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='model'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>virtio</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>virtio-transitional</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>virtio-non-transitional</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='backendModel'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>random</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>egd</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>builtin</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </rng>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <filesystem supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='driverType'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>path</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>handle</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>virtiofs</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </filesystem>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <tpm supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='model'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>tpm-tis</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>tpm-crb</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='backendModel'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>emulator</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>external</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='backendVersion'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>2.0</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </tpm>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <redirdev supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='bus'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>usb</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </redirdev>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <channel supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='type'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>pty</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>unix</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </channel>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <crypto supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='model'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='type'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>qemu</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='backendModel'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>builtin</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </crypto>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <interface supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='backendType'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>default</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>passt</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </interface>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <panic supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='model'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>isa</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>hyperv</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </panic>
Oct 10 10:05:46 compute-1 nova_compute[235132]:   </devices>
Oct 10 10:05:46 compute-1 nova_compute[235132]:   <features>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <gic supported='no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <vmcoreinfo supported='yes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <genid supported='yes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <backingStoreInput supported='yes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <backup supported='yes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <async-teardown supported='yes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <ps2 supported='yes'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <sev supported='no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <sgx supported='no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <hyperv supported='yes'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       <enum name='features'>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>relaxed</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>vapic</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>spinlocks</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>vpindex</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>runtime</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>synic</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>stimer</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>reset</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>vendor_id</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>frequencies</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>reenlightenment</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>tlbflush</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>ipi</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>avic</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>emsr_bitmap</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:         <value>xmm_input</value>
Oct 10 10:05:46 compute-1 nova_compute[235132]:       </enum>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     </hyperv>
Oct 10 10:05:46 compute-1 nova_compute[235132]:     <launchSecurity supported='no'/>
Oct 10 10:05:46 compute-1 nova_compute[235132]:   </features>
Oct 10 10:05:46 compute-1 nova_compute[235132]: </domainCapabilities>
Oct 10 10:05:46 compute-1 nova_compute[235132]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 10 10:05:46 compute-1 nova_compute[235132]: 2025-10-10 10:05:46.071 2 DEBUG nova.virt.libvirt.host [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct 10 10:05:46 compute-1 nova_compute[235132]: 2025-10-10 10:05:46.071 2 DEBUG nova.virt.libvirt.host [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct 10 10:05:46 compute-1 nova_compute[235132]: 2025-10-10 10:05:46.072 2 DEBUG nova.virt.libvirt.host [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct 10 10:05:46 compute-1 nova_compute[235132]: 2025-10-10 10:05:46.072 2 INFO nova.virt.libvirt.host [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Secure Boot support detected
Oct 10 10:05:46 compute-1 nova_compute[235132]: 2025-10-10 10:05:46.074 2 INFO nova.virt.libvirt.driver [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct 10 10:05:46 compute-1 nova_compute[235132]: 2025-10-10 10:05:46.074 2 INFO nova.virt.libvirt.driver [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct 10 10:05:46 compute-1 nova_compute[235132]: 2025-10-10 10:05:46.083 2 DEBUG nova.virt.libvirt.driver [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Oct 10 10:05:46 compute-1 nova_compute[235132]: 2025-10-10 10:05:46.105 2 INFO nova.virt.node [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Determined node identity c9b2c4a3-cb19-4387-8719-36027e3cdaec from /var/lib/nova/compute_id
Oct 10 10:05:46 compute-1 nova_compute[235132]: 2025-10-10 10:05:46.120 2 WARNING nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Compute nodes ['c9b2c4a3-cb19-4387-8719-36027e3cdaec'] for host compute-1.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Oct 10 10:05:46 compute-1 nova_compute[235132]: 2025-10-10 10:05:46.143 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Oct 10 10:05:46 compute-1 nova_compute[235132]: 2025-10-10 10:05:46.155 2 WARNING nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] No compute node record found for host compute-1.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-1.ctlplane.example.com could not be found.
Oct 10 10:05:46 compute-1 nova_compute[235132]: 2025-10-10 10:05:46.156 2 DEBUG oslo_concurrency.lockutils [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:05:46 compute-1 nova_compute[235132]: 2025-10-10 10:05:46.156 2 DEBUG oslo_concurrency.lockutils [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:05:46 compute-1 nova_compute[235132]: 2025-10-10 10:05:46.156 2 DEBUG oslo_concurrency.lockutils [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:05:46 compute-1 nova_compute[235132]: 2025-10-10 10:05:46.156 2 DEBUG nova.compute.resource_tracker [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:05:46 compute-1 nova_compute[235132]: 2025-10-10 10:05:46.157 2 DEBUG oslo_concurrency.processutils [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:05:46 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:46 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:46 compute-1 rsyslogd[1005]: imjournal from <np0005479822:nova_compute>: begin to drop messages due to rate-limiting
Oct 10 10:05:46 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:05:46 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1757370242' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:05:46 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:46 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:46 compute-1 nova_compute[235132]: 2025-10-10 10:05:46.640 2 DEBUG oslo_concurrency.processutils [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:05:46 compute-1 nova_compute[235132]: 2025-10-10 10:05:46.832 2 WARNING nova.virt.libvirt.driver [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:05:46 compute-1 nova_compute[235132]: 2025-10-10 10:05:46.833 2 DEBUG nova.compute.resource_tracker [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5279MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:05:46 compute-1 nova_compute[235132]: 2025-10-10 10:05:46.833 2 DEBUG oslo_concurrency.lockutils [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:05:46 compute-1 nova_compute[235132]: 2025-10-10 10:05:46.833 2 DEBUG oslo_concurrency.lockutils [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:05:46 compute-1 nova_compute[235132]: 2025-10-10 10:05:46.892 2 WARNING nova.compute.resource_tracker [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] No compute node record for compute-1.ctlplane.example.com:c9b2c4a3-cb19-4387-8719-36027e3cdaec: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host c9b2c4a3-cb19-4387-8719-36027e3cdaec could not be found.
Oct 10 10:05:46 compute-1 nova_compute[235132]: 2025-10-10 10:05:46.915 2 INFO nova.compute.resource_tracker [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Compute node record created for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com with uuid: c9b2c4a3-cb19-4387-8719-36027e3cdaec
Oct 10 10:05:46 compute-1 nova_compute[235132]: 2025-10-10 10:05:46.976 2 DEBUG nova.compute.resource_tracker [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:05:46 compute-1 nova_compute[235132]: 2025-10-10 10:05:46.977 2 DEBUG nova.compute.resource_tracker [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:05:47 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:05:47 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/1757370242' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:05:47 compute-1 nova_compute[235132]: 2025-10-10 10:05:47.097 2 INFO nova.scheduler.client.report [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [req-3bd6fbfb-046b-458e-98d6-e6fa83b57d4b] Created resource provider record via placement API for resource provider with UUID c9b2c4a3-cb19-4387-8719-36027e3cdaec and name compute-1.ctlplane.example.com.
Oct 10 10:05:47 compute-1 nova_compute[235132]: 2025-10-10 10:05:47.120 2 DEBUG oslo_concurrency.processutils [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:05:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:05:47 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:47 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:05:47 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:47.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:05:47 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:47 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af000a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:05:47 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1942422153' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:05:47 compute-1 nova_compute[235132]: 2025-10-10 10:05:47.609 2 DEBUG oslo_concurrency.processutils [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:05:47 compute-1 nova_compute[235132]: 2025-10-10 10:05:47.616 2 DEBUG nova.virt.libvirt.host [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Oct 10 10:05:47 compute-1 nova_compute[235132]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Oct 10 10:05:47 compute-1 nova_compute[235132]: 2025-10-10 10:05:47.617 2 INFO nova.virt.libvirt.host [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] kernel doesn't support AMD SEV
Oct 10 10:05:47 compute-1 nova_compute[235132]: 2025-10-10 10:05:47.619 2 DEBUG nova.compute.provider_tree [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Updating inventory in ProviderTree for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 10 10:05:47 compute-1 nova_compute[235132]: 2025-10-10 10:05:47.620 2 DEBUG nova.virt.libvirt.driver [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 10 10:05:47 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:47 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:47 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:47.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:47 compute-1 nova_compute[235132]: 2025-10-10 10:05:47.813 2 DEBUG nova.scheduler.client.report [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Updated inventory for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Oct 10 10:05:47 compute-1 nova_compute[235132]: 2025-10-10 10:05:47.814 2 DEBUG nova.compute.provider_tree [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Updating resource provider c9b2c4a3-cb19-4387-8719-36027e3cdaec generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct 10 10:05:47 compute-1 nova_compute[235132]: 2025-10-10 10:05:47.814 2 DEBUG nova.compute.provider_tree [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Updating inventory in ProviderTree for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 10 10:05:48 compute-1 nova_compute[235132]: 2025-10-10 10:05:48.013 2 DEBUG nova.compute.provider_tree [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Updating resource provider c9b2c4a3-cb19-4387-8719-36027e3cdaec generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct 10 10:05:48 compute-1 nova_compute[235132]: 2025-10-10 10:05:48.070 2 DEBUG nova.compute.resource_tracker [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:05:48 compute-1 nova_compute[235132]: 2025-10-10 10:05:48.070 2 DEBUG oslo_concurrency.lockutils [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.237s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:05:48 compute-1 nova_compute[235132]: 2025-10-10 10:05:48.071 2 DEBUG nova.service [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Oct 10 10:05:48 compute-1 ceph-mon[79167]: pgmap v587: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:48 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/39140762' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:05:48 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3139530236' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:05:48 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/1942422153' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:05:48 compute-1 nova_compute[235132]: 2025-10-10 10:05:48.196 2 DEBUG nova.service [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Oct 10 10:05:48 compute-1 nova_compute[235132]: 2025-10-10 10:05:48.197 2 DEBUG nova.servicegroup.drivers.db [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] DB_Driver: join new ServiceGroup member compute-1.ctlplane.example.com to the compute group, service = <Service: host=compute-1.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Oct 10 10:05:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:48 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ac8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:48 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:49 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/425651605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:05:49 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3204732520' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:05:49 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:49 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:49 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:49.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:49 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc003c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:49 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:49 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:49 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:49.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:50 compute-1 ceph-mon[79167]: pgmap v588: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:50 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:50 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af000a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:50 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:50 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ac8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:51 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:51 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:51 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:51.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:51 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:51 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:51 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:51 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:51.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:52 compute-1 ceph-mon[79167]: pgmap v589: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:52 compute-1 nova_compute[235132]: 2025-10-10 10:05:52.199 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:05:52 compute-1 nova_compute[235132]: 2025-10-10 10:05:52.226 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:05:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:05:52 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:52 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc003c70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:52 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:52 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af000a7e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:53 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:53 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:05:53 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:53.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:05:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:53 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ac8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:53 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:53 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:53 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:53.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:54 compute-1 ceph-mon[79167]: pgmap v590: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 10:05:54 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:54 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:54 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:54 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc003c90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:55 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:55 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:55 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:55.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:55 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ad4002ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:55 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:55 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:05:55 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:55.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:05:55 compute-1 sudo[235533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:05:55 compute-1 sudo[235533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:05:55 compute-1 sudo[235533]: pam_unix(sudo:session): session closed for user root
Oct 10 10:05:56 compute-1 ceph-mon[79167]: pgmap v591: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:56 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:56 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ac0000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:56 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:56 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:57 compute-1 podman[235558]: 2025-10-10 10:05:57.013096101 +0000 UTC m=+0.098912299 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 10 10:05:57 compute-1 ceph-mon[79167]: pgmap v592: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:05:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:05:57 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:57 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:57 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:57.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:57 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc003cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:57 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:57 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:57 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:57.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:58 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ad4002ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:58 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ac00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:59 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:59 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:59 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:05:59.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:59 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:05:59 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:05:59 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:05:59 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:05:59 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:05:59.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:05:59 compute-1 ceph-mon[79167]: pgmap v593: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:06:00 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:00 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc003e50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:00 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:00 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ad4002ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:01 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:01 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:06:01 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:01.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:06:01 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:01 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ac00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:01 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:01 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:06:01 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:01.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:06:01 compute-1 ceph-mon[79167]: pgmap v594: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:06:01 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:06:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:06:02 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:02 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:02 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:02 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc003e50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:03 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:03 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:03 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:03.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:03 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ad4002ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:03 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:03 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:03 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:03.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:03 compute-1 ceph-mon[79167]: pgmap v595: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 10:06:04 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:04 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ac00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:04 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:04 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:05 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:05 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:05 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:05.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:05 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:05 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc003e50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:05 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:05 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:05 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:05.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:05 compute-1 ceph-mon[79167]: pgmap v596: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:06:06 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:06 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ad4002ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:06 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:06 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ac0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:06:07 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:07 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:06:07 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:07.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:06:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:07 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:07 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:07 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:07 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:07.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:07 compute-1 ceph-mon[79167]: pgmap v597: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:06:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:08 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc003e50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:08 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ad40038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:09 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:09 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:09 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:09.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:09 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ac0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:09 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:09 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:09 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:09.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:09 compute-1 ceph-mon[79167]: pgmap v598: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:06:10 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:10 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:10 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:10 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc003e50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:10 compute-1 podman[235584]: 2025-10-10 10:06:10.973591912 +0000 UTC m=+0.079170459 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 10:06:10 compute-1 podman[235585]: 2025-10-10 10:06:10.981873448 +0000 UTC m=+0.084961586 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:06:11 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:11 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:11 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:11.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:11 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ad40038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:11 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:11 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:06:11 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:11.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:06:11 compute-1 ceph-mon[79167]: pgmap v599: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:06:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:06:12 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:12 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ac0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:12 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:12 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:13 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:13 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:13 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:13.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:13 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc003e50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:13 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:13 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:13 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:13.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:13 compute-1 ceph-mon[79167]: pgmap v600: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 10:06:13 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/1790607881' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:06:13 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/1790607881' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:06:13 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/2805633632' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:06:13 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/2805633632' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:06:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:14 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ad40038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:14 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ac0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:14 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/2939433712' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:06:14 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/2939433712' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:06:15 compute-1 podman[235624]: 2025-10-10 10:06:15.017936864 +0000 UTC m=+0.117779988 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 10 10:06:15 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:15 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:15 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:15.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:15 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:15 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:15 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:06:15 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:15.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:06:15 compute-1 sudo[235651]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:06:15 compute-1 sudo[235651]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:06:15 compute-1 sudo[235651]: pam_unix(sudo:session): session closed for user root
Oct 10 10:06:15 compute-1 ceph-mon[79167]: pgmap v601: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:06:16 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:16 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc003e50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:16 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:16 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ad40038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:06:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:06:17 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:17 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:17 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:17.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:17 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:17 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ac0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:17 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:17 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:17 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:17.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:17 compute-1 ceph-mon[79167]: pgmap v602: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:06:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:18 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:18 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc003e70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:19 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:19 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:19 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:19.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:19 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ad40038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:19 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:19 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:19 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:19.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:19 compute-1 ceph-mon[79167]: pgmap v603: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:06:20 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:20 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ac0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:20 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:20 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:21 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:21 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:21 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:21.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:21 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc003e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:21 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:21 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:21 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:21.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:21 compute-1 ceph-mon[79167]: pgmap v604: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:06:22 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:06:22 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:22 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ad40038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:22 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:22 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ac0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:23 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:23 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:23 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:23.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100623 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 10:06:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:23 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:23 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:23 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:23 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:23.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:24 compute-1 ceph-mon[79167]: pgmap v605: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 10:06:24 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:24 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3acc003eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:24 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:24 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ad40038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:25 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:25 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:06:25 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:25.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:06:25 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:25 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ac0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:25 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:25 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:06:25 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:25.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:06:26 compute-1 ceph-mon[79167]: pgmap v606: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:06:26 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:26 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:26 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:26 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ac8001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:27 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:06:27 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:27 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:27 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:27.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:27 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:27 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af0002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:27 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:27 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:27 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:27.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:27 compute-1 podman[235684]: 2025-10-10 10:06:27.99527602 +0000 UTC m=+0.087517299 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 10 10:06:28 compute-1 ceph-mon[79167]: pgmap v607: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:06:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:28 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ac0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:28 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ac0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:29 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:29 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:29 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:29.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:29 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ac8001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:29 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:29 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:29 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:29.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:30 compute-1 ceph-mon[79167]: pgmap v608: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:06:30 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:30 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af0002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:30 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:30 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:31 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:31 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:31 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:31.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:31 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ac0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:31 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:31 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:06:31 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:31.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:06:32 compute-1 ceph-mon[79167]: pgmap v609: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:06:32 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:06:32 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:06:32 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:32 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ac8001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:32 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:32 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:06:32 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:32 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af0002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:33 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:33 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:33 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:33.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:33 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:33 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:33 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:33 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:33.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:34 compute-1 ceph-mon[79167]: pgmap v610: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:06:34 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:34 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ac0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:34 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:34 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ac8001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:35 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:35 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:35 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:35.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:35 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:35 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af0002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:35 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:35 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:06:35 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:35 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:06:35 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:35 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:06:35 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:35.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:06:35 compute-1 sudo[235710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:06:35 compute-1 sudo[235710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:06:35 compute-1 sudo[235710]: pam_unix(sudo:session): session closed for user root
Oct 10 10:06:36 compute-1 ceph-mon[79167]: pgmap v611: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:06:36 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:36 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:36 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:36 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ac8001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:06:37 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:37 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:37 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:37.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:37 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ac0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:37 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:37 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:37 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:37.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:38 compute-1 ceph-mon[79167]: pgmap v612: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:06:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:38 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af0008dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:38 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 10:06:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:38 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:39 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:39 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:39 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:39.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:39 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:39 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:39 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:39 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:39.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:40 compute-1 ceph-mon[79167]: pgmap v613: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:06:40 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:40 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ac0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:40 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:40 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af0008dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:41 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:41 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:41 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:41.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:41 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:41 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af0008dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:41 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:41 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:41 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:41.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:42 compute-1 podman[235738]: 2025-10-10 10:06:42.006770742 +0000 UTC m=+0.100721942 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3)
Oct 10 10:06:42 compute-1 podman[235739]: 2025-10-10 10:06:42.020054646 +0000 UTC m=+0.107920219 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 10 10:06:42 compute-1 ceph-mon[79167]: pgmap v614: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:06:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:06:42.200 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:06:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:06:42.200 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:06:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:06:42.200 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:06:42 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:06:42 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:42 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af0008dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:42 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:42 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ac0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:43 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:43 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:43 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:43.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:43 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3af0008dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:43 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:43 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:43 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:43.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:44 compute-1 ceph-mon[79167]: pgmap v615: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 10:06:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:44 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ac80036c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:44 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ae40030f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:06:45 compute-1 nova_compute[235132]: 2025-10-10 10:06:45.046 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:06:45 compute-1 nova_compute[235132]: 2025-10-10 10:06:45.046 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:06:45 compute-1 nova_compute[235132]: 2025-10-10 10:06:45.047 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 10:06:45 compute-1 nova_compute[235132]: 2025-10-10 10:06:45.047 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 10:06:45 compute-1 nova_compute[235132]: 2025-10-10 10:06:45.073 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 10 10:06:45 compute-1 nova_compute[235132]: 2025-10-10 10:06:45.074 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:06:45 compute-1 nova_compute[235132]: 2025-10-10 10:06:45.075 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:06:45 compute-1 nova_compute[235132]: 2025-10-10 10:06:45.075 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:06:45 compute-1 nova_compute[235132]: 2025-10-10 10:06:45.075 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:06:45 compute-1 nova_compute[235132]: 2025-10-10 10:06:45.076 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:06:45 compute-1 nova_compute[235132]: 2025-10-10 10:06:45.076 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:06:45 compute-1 nova_compute[235132]: 2025-10-10 10:06:45.077 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 10:06:45 compute-1 nova_compute[235132]: 2025-10-10 10:06:45.077 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:06:45 compute-1 nova_compute[235132]: 2025-10-10 10:06:45.113 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:06:45 compute-1 nova_compute[235132]: 2025-10-10 10:06:45.114 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:06:45 compute-1 nova_compute[235132]: 2025-10-10 10:06:45.114 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:06:45 compute-1 nova_compute[235132]: 2025-10-10 10:06:45.114 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:06:45 compute-1 nova_compute[235132]: 2025-10-10 10:06:45.115 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:06:45 compute-1 ceph-mon[79167]: pgmap v616: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:06:45 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:45 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:45 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:45.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100645 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 10:06:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[226982]: 10/10/2025 10:06:45 : epoch 68e8da37 : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3ac0003c30 fd 38 proxy ignored for local
Oct 10 10:06:45 compute-1 kernel: ganesha.nfsd[235531]: segfault at 50 ip 00007f3b9da8532e sp 00007f3b567fb210 error 4 in libntirpc.so.5.8[7f3b9da6a000+2c000] likely on CPU 7 (core 0, socket 7)
Oct 10 10:06:45 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 10 10:06:45 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:06:45 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3155134759' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:06:45 compute-1 nova_compute[235132]: 2025-10-10 10:06:45.606 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:06:45 compute-1 systemd[1]: Started Process Core Dump (PID 235800/UID 0).
Oct 10 10:06:45 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:45 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:06:45 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:45.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:06:45 compute-1 podman[235802]: 2025-10-10 10:06:45.73387537 +0000 UTC m=+0.103787775 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 10 10:06:45 compute-1 nova_compute[235132]: 2025-10-10 10:06:45.783 2 WARNING nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:06:45 compute-1 nova_compute[235132]: 2025-10-10 10:06:45.785 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5280MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:06:45 compute-1 nova_compute[235132]: 2025-10-10 10:06:45.785 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:06:45 compute-1 nova_compute[235132]: 2025-10-10 10:06:45.786 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:06:45 compute-1 nova_compute[235132]: 2025-10-10 10:06:45.881 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:06:45 compute-1 nova_compute[235132]: 2025-10-10 10:06:45.882 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:06:45 compute-1 nova_compute[235132]: 2025-10-10 10:06:45.911 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:06:46 compute-1 sudo[235830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:06:46 compute-1 sudo[235830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:06:46 compute-1 sudo[235830]: pam_unix(sudo:session): session closed for user root
Oct 10 10:06:46 compute-1 sudo[235874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:06:46 compute-1 sudo[235874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:06:46 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3155134759' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:06:46 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:06:46 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2009889929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:06:46 compute-1 nova_compute[235132]: 2025-10-10 10:06:46.375 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:06:46 compute-1 nova_compute[235132]: 2025-10-10 10:06:46.385 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:06:46 compute-1 nova_compute[235132]: 2025-10-10 10:06:46.407 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:06:46 compute-1 nova_compute[235132]: 2025-10-10 10:06:46.409 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:06:46 compute-1 nova_compute[235132]: 2025-10-10 10:06:46.409 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:06:46 compute-1 sudo[235874]: pam_unix(sudo:session): session closed for user root
Oct 10 10:06:46 compute-1 systemd-coredump[235803]: Process 227001 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 58:
                                                    #0  0x00007f3b9da8532e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Oct 10 10:06:46 compute-1 systemd[1]: systemd-coredump@9-235800-0.service: Deactivated successfully.
Oct 10 10:06:46 compute-1 systemd[1]: systemd-coredump@9-235800-0.service: Consumed 1.268s CPU time.
Oct 10 10:06:46 compute-1 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 10:06:47 compute-1 podman[235938]: 2025-10-10 10:06:47.014715976 +0000 UTC m=+0.031952897 container died d9857f148c0b09e0041b575e39e53db43b365e45c1430ca88b3c2539bad267b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:06:47 compute-1 systemd[1]: var-lib-containers-storage-overlay-693527e3052a34160e40c573904d9d4bb456ea383bca413c0394b60bb4ac049a-merged.mount: Deactivated successfully.
Oct 10 10:06:47 compute-1 podman[235938]: 2025-10-10 10:06:47.060070869 +0000 UTC m=+0.077307770 container remove d9857f148c0b09e0041b575e39e53db43b365e45c1430ca88b3c2539bad267b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:06:47 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Main process exited, code=exited, status=139/n/a
Oct 10 10:06:47 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Failed with result 'exit-code'.
Oct 10 10:06:47 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 1.832s CPU time.
Oct 10 10:06:47 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1920469742' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:06:47 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:06:47 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/2009889929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:06:47 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:06:47 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:06:47 compute-1 ceph-mon[79167]: pgmap v617: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:06:47 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:06:47 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:06:47 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:06:47 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:06:47 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:06:47 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2676868760' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:06:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:06:47 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:47 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:47 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:47.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:47 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:47 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:47 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:47.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:48 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3854743988' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:06:49 compute-1 ceph-mon[79167]: pgmap v618: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:06:49 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1886198263' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:06:49 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:49 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:49 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:49.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:49 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:49 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:49 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:49.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:51 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:51 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:06:51 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:51.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:06:51 compute-1 sudo[235984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:06:51 compute-1 sudo[235984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:06:51 compute-1 sudo[235984]: pam_unix(sudo:session): session closed for user root
Oct 10 10:06:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100651 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 10:06:51 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:51 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:06:51 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:51.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:06:51 compute-1 ceph-mon[79167]: pgmap v619: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:06:51 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:06:51 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:06:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:06:53 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:53 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:53 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:53.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:53 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:53 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:06:53 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:53.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:06:53 compute-1 ceph-mon[79167]: pgmap v620: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:06:55 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:55 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:55 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:55.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:55 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:55 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:55 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:55.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:55 compute-1 ceph-mon[79167]: pgmap v621: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:06:56 compute-1 sudo[236011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:06:56 compute-1 sudo[236011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:06:56 compute-1 sudo[236011]: pam_unix(sudo:session): session closed for user root
Oct 10 10:06:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:06:57 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Scheduled restart job, restart counter is at 10.
Oct 10 10:06:57 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 10:06:57 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 1.832s CPU time.
Oct 10 10:06:57 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 10:06:57 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:57 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:57 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:57.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:57 compute-1 podman[236087]: 2025-10-10 10:06:57.65822644 +0000 UTC m=+0.047935135 container create 5bbefa4ea748a644be2ecf190044e93212464e29f8f68de8c12c0152f38f884e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:06:57 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52b5db3a1abe9ac35ae244b86c53e77450ea7623048e488d415454372713c949/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 10 10:06:57 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52b5db3a1abe9ac35ae244b86c53e77450ea7623048e488d415454372713c949/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:06:57 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52b5db3a1abe9ac35ae244b86c53e77450ea7623048e488d415454372713c949/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:06:57 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52b5db3a1abe9ac35ae244b86c53e77450ea7623048e488d415454372713c949/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.mssvzx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:06:57 compute-1 podman[236087]: 2025-10-10 10:06:57.723715685 +0000 UTC m=+0.113424380 container init 5bbefa4ea748a644be2ecf190044e93212464e29f8f68de8c12c0152f38f884e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 10 10:06:57 compute-1 podman[236087]: 2025-10-10 10:06:57.729080872 +0000 UTC m=+0.118789587 container start 5bbefa4ea748a644be2ecf190044e93212464e29f8f68de8c12c0152f38f884e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:06:57 compute-1 bash[236087]: 5bbefa4ea748a644be2ecf190044e93212464e29f8f68de8c12c0152f38f884e
Oct 10 10:06:57 compute-1 podman[236087]: 2025-10-10 10:06:57.637557434 +0000 UTC m=+0.027266149 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:06:57 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 10:06:57 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:57 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:57 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:57.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:06:57 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 10 10:06:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:06:57 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 10 10:06:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:06:57 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 10 10:06:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:06:57 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 10 10:06:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:06:57 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 10 10:06:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:06:57 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 10 10:06:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:06:57 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 10 10:06:57 compute-1 ceph-mon[79167]: pgmap v622: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:06:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:06:57 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:06:58 compute-1 podman[236144]: 2025-10-10 10:06:58.968898394 +0000 UTC m=+0.072824887 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Oct 10 10:06:59 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:59 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:06:59 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:06:59.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:06:59 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:06:59 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:06:59 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:06:59.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:06:59 compute-1 ceph-mon[79167]: pgmap v623: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:07:01 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:01 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:07:01 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:01.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:07:01 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:01 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:01 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:01.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:01 compute-1 ceph-mon[79167]: pgmap v624: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:07:01 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:07:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:07:03 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:03 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:03 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:03.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:03 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:03 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:07:03 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:03.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:07:03 compute-1 ceph-mon[79167]: pgmap v625: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Oct 10 10:07:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:03 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:07:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:03 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:07:05 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Oct 10 10:07:05 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/919671696' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 10 10:07:05 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:05 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:07:05 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:05.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:07:05 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:05 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:07:05 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:05.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:07:05 compute-1 ceph-mon[79167]: pgmap v626: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 596 B/s wr, 1 op/s
Oct 10 10:07:05 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/919671696' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 10 10:07:05 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/4284610426' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 10 10:07:06 compute-1 rsyslogd[1005]: imjournal: 1338 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct 10 10:07:06 compute-1 ceph-mon[79167]: from='client.24518 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 10 10:07:06 compute-1 ceph-mon[79167]: from='client.24518 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Oct 10 10:07:06 compute-1 ceph-mon[79167]: from='client.24748 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 10 10:07:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:07:07 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:07 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:07 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:07.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100707 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 10:07:07 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:07 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:07 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:07.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:07 compute-1 ceph-mon[79167]: pgmap v627: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 10:07:09 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:09 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:09 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:09.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:09 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:09 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:07:09 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:09.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:07:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 10:07:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 10 10:07:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 10 10:07:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 10 10:07:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 10 10:07:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 10 10:07:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 10 10:07:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:07:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:07:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:07:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 10 10:07:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:07:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 10 10:07:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 10 10:07:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 10 10:07:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 10 10:07:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 10 10:07:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 10 10:07:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 10 10:07:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 10 10:07:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 10 10:07:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 10 10:07:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 10 10:07:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 10 10:07:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 10:07:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 10 10:07:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 10:07:09 compute-1 ceph-mon[79167]: pgmap v628: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 938 B/s wr, 152 op/s
Oct 10 10:07:10 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:10 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe8000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:10 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:10 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:11 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:11 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:07:11 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:11.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:07:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:11 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc0000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:11 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:11 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:07:11 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:11.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:07:12 compute-1 ceph-mon[79167]: pgmap v629: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 938 B/s wr, 152 op/s
Oct 10 10:07:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:07:12 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:12 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:12 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:12 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:12 compute-1 podman[236186]: 2025-10-10 10:07:12.968258945 +0000 UTC m=+0.063388477 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=multipathd)
Oct 10 10:07:13 compute-1 podman[236185]: 2025-10-10 10:07:13.003604834 +0000 UTC m=+0.097753540 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid)
Oct 10 10:07:13 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:13 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:13 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:13.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100713 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 10:07:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:13 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0001680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:13 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:13 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:07:13 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:13.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:07:14 compute-1 ceph-mon[79167]: pgmap v630: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 938 B/s wr, 152 op/s
Oct 10 10:07:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:14 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:14 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:15 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:15 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:15 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:15.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:15 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:15 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:15 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:15 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:15.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:16 compute-1 ceph-mon[79167]: pgmap v631: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 341 B/s wr, 150 op/s
Oct 10 10:07:16 compute-1 podman[236228]: 2025-10-10 10:07:16.047780188 +0000 UTC m=+0.143800052 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 10 10:07:16 compute-1 sudo[236256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:07:16 compute-1 sudo[236256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:07:16 compute-1 sudo[236256]: pam_unix(sudo:session): session closed for user root
Oct 10 10:07:16 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:16 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0001680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:16 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:16 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:07:16 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:16 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:17 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:07:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:07:17 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:17 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:17 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:17.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:17 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:17 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:17 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:17 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:17 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:17.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:18 compute-1 ceph-mon[79167]: pgmap v632: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 341 B/s wr, 150 op/s
Oct 10 10:07:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:18 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:18 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0001680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:19 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:19 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:19 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:19.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:19 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:19 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:07:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:19 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:07:19 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:19 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:19 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:19.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:20 compute-1 ceph-mon[79167]: pgmap v633: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 938 B/s wr, 152 op/s
Oct 10 10:07:20 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:20 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:20 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:20 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:21 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:21 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:21 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:21.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:21 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:21 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:21 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:21 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:21.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:22 compute-1 ceph-mon[79167]: pgmap v634: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 2 op/s
Oct 10 10:07:22 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:07:22 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:22 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:22 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:22 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 10:07:22 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100722 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 10:07:22 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:22 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:23 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:23 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:23 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:23.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:23 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:23 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:23 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:07:23 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:23.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:07:24 compute-1 ceph-mon[79167]: pgmap v635: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 852 B/s wr, 2 op/s
Oct 10 10:07:24 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:24 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:24 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:24 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:25 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/3351629822' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 10 10:07:25 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/1743770823' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 10 10:07:25 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:25 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:25 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:25.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:25 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:25 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:25 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:25 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:25 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:25.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:26 compute-1 ceph-mon[79167]: pgmap v636: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 852 B/s wr, 2 op/s
Oct 10 10:07:26 compute-1 ceph-mon[79167]: from='client.15102 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 10 10:07:26 compute-1 ceph-mon[79167]: from='client.15102 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Oct 10 10:07:26 compute-1 ceph-mon[79167]: from='client.24766 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 10 10:07:26 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:26 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:26 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:26 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:27 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/1938819953' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:07:27 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/1938819953' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:07:27 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:07:27 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:27 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:07:27 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:27.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:07:27 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:27 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:27 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:27 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:27 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:27.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:28 compute-1 ceph-mon[79167]: pgmap v637: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 852 B/s wr, 2 op/s
Oct 10 10:07:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:28 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:28 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:29 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:29 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:07:29 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:29.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:07:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:29 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:29 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:29 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:07:29 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:29.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:07:29 compute-1 podman[236288]: 2025-10-10 10:07:29.98989772 +0000 UTC m=+0.086756589 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Oct 10 10:07:30 compute-1 ceph-mon[79167]: pgmap v638: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:07:30 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:30 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:30 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:30 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:31 compute-1 ceph-mon[79167]: pgmap v639: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 341 B/s wr, 1 op/s
Oct 10 10:07:31 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:31 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:07:31 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:31.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:07:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:31 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:31 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:31 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:31 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:31.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:31 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:07:32 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:07:32 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:07:32 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:32 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:32 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:32 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:33 compute-1 ceph-mon[79167]: pgmap v640: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 767 B/s wr, 2 op/s
Oct 10 10:07:33 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:33 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:33 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:33.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:33 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:33 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:33 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:33 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:33.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:34 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:34 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:34 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:34 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:34 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:34 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:07:34 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:34 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:07:35 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:35 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:35 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:35.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:35 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:35 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:35 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:35 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:35 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:35.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:35 compute-1 ceph-mon[79167]: pgmap v641: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 511 B/s wr, 1 op/s
Oct 10 10:07:36 compute-1 sudo[236311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:07:36 compute-1 sudo[236311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:07:36 compute-1 sudo[236311]: pam_unix(sudo:session): session closed for user root
Oct 10 10:07:36 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:36 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:36 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:36 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:07:37 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:37 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:37 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:37.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:37 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:37 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:37 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:37 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:37.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:37 compute-1 ceph-mon[79167]: pgmap v642: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 511 B/s wr, 1 op/s
Oct 10 10:07:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:38 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:38 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:39 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:39 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:39 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:39.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:39 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:07:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:39 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:07:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:39 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:39 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:39 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:07:39 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:39.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:07:39 compute-1 ceph-mon[79167]: pgmap v643: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 2 op/s
Oct 10 10:07:40 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:40 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:40 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:40 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:41 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:41 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:41 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:41.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:41 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:41 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:41 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:41 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:41 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:41.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:41 compute-1 ceph-mon[79167]: pgmap v644: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 10:07:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:07:42.201 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:07:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:07:42.201 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:07:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:07:42.202 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:07:42 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:07:42 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:42 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:42 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:42 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 10:07:42 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:42 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe8000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:43 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:43 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:43 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:43.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:43 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:43 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:43 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:43 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:43.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:43 compute-1 ceph-mon[79167]: pgmap v645: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 767 B/s wr, 3 op/s
Oct 10 10:07:43 compute-1 podman[236343]: 2025-10-10 10:07:43.980798485 +0000 UTC m=+0.073785812 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:07:43 compute-1 podman[236342]: 2025-10-10 10:07:43.980691332 +0000 UTC m=+0.064030365 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 10:07:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:44 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:44 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:45 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:45 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:45 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:45.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:45 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe8001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:45 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:07:45 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:45 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:45 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:45.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:45 compute-1 ceph-mon[79167]: pgmap v646: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 341 B/s wr, 2 op/s
Oct 10 10:07:46 compute-1 nova_compute[235132]: 2025-10-10 10:07:46.401 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:07:46 compute-1 nova_compute[235132]: 2025-10-10 10:07:46.402 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:07:46 compute-1 nova_compute[235132]: 2025-10-10 10:07:46.423 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:07:46 compute-1 nova_compute[235132]: 2025-10-10 10:07:46.423 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:07:46 compute-1 nova_compute[235132]: 2025-10-10 10:07:46.457 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:07:46 compute-1 nova_compute[235132]: 2025-10-10 10:07:46.457 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:07:46 compute-1 nova_compute[235132]: 2025-10-10 10:07:46.457 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:07:46 compute-1 nova_compute[235132]: 2025-10-10 10:07:46.458 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:07:46 compute-1 nova_compute[235132]: 2025-10-10 10:07:46.458 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:07:46 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:46 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8001930 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:46 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:46 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:46 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:07:46 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:07:46 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4016404872' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:07:46 compute-1 nova_compute[235132]: 2025-10-10 10:07:46.986 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:07:47 compute-1 podman[236401]: 2025-10-10 10:07:47.008687193 +0000 UTC m=+0.118091337 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Oct 10 10:07:47 compute-1 nova_compute[235132]: 2025-10-10 10:07:47.165 2 WARNING nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:07:47 compute-1 nova_compute[235132]: 2025-10-10 10:07:47.166 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5223MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:07:47 compute-1 nova_compute[235132]: 2025-10-10 10:07:47.167 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:07:47 compute-1 nova_compute[235132]: 2025-10-10 10:07:47.167 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:07:47 compute-1 nova_compute[235132]: 2025-10-10 10:07:47.241 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:07:47 compute-1 nova_compute[235132]: 2025-10-10 10:07:47.242 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:07:47 compute-1 nova_compute[235132]: 2025-10-10 10:07:47.270 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:07:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:07:47 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:47 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:47 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:47.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:47 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:47 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:07:47 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3155772583' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:07:47 compute-1 nova_compute[235132]: 2025-10-10 10:07:47.741 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:07:47 compute-1 nova_compute[235132]: 2025-10-10 10:07:47.748 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:07:47 compute-1 nova_compute[235132]: 2025-10-10 10:07:47.763 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:07:47 compute-1 nova_compute[235132]: 2025-10-10 10:07:47.765 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:07:47 compute-1 nova_compute[235132]: 2025-10-10 10:07:47.765 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:07:47 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:47 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:47 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:47.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:47 compute-1 ceph-mon[79167]: pgmap v647: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 341 B/s wr, 2 op/s
Oct 10 10:07:47 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/4016404872' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:07:47 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3155772583' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:07:47 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1191965780' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:07:48 compute-1 nova_compute[235132]: 2025-10-10 10:07:48.386 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:07:48 compute-1 nova_compute[235132]: 2025-10-10 10:07:48.386 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 10:07:48 compute-1 nova_compute[235132]: 2025-10-10 10:07:48.387 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 10:07:48 compute-1 nova_compute[235132]: 2025-10-10 10:07:48.406 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 10 10:07:48 compute-1 nova_compute[235132]: 2025-10-10 10:07:48.406 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:07:48 compute-1 nova_compute[235132]: 2025-10-10 10:07:48.407 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:07:48 compute-1 nova_compute[235132]: 2025-10-10 10:07:48.407 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:07:48 compute-1 nova_compute[235132]: 2025-10-10 10:07:48.407 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:07:48 compute-1 nova_compute[235132]: 2025-10-10 10:07:48.407 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:07:48 compute-1 nova_compute[235132]: 2025-10-10 10:07:48.408 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 10:07:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:48 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe8001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:48 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:07:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:48 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:07:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100748 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 10:07:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:48 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8001930 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:48 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/85312783' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:07:48 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3798602020' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:07:48 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/4252308067' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:07:49 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:49 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:07:49 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:49.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:07:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:49 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:49 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:49 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:49 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:49.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:49 compute-1 ceph-mon[79167]: pgmap v648: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Oct 10 10:07:50 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:50 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:50 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:50 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe8001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:51 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:51 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:51 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:51.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:51 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8001930 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:51 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 10:07:51 compute-1 sudo[236454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:07:51 compute-1 sudo[236454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:07:51 compute-1 sudo[236454]: pam_unix(sudo:session): session closed for user root
Oct 10 10:07:51 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:51 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:51 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:51.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:51 compute-1 sudo[236479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:07:51 compute-1 sudo[236479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:07:52 compute-1 ceph-mon[79167]: pgmap v649: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Oct 10 10:07:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:07:52 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:52 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:52 compute-1 sudo[236479]: pam_unix(sudo:session): session closed for user root
Oct 10 10:07:52 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:52 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:53 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:07:53 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:07:53 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:07:53 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:07:53 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:07:53 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:07:53 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:07:53 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:53 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:53 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:53.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:53 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe8001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:53 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:53 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:07:53 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:53.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:07:54 compute-1 ceph-mon[79167]: pgmap v650: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Oct 10 10:07:54 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:54 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8002da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:54 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:54 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:55 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:55 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:55 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:55.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100755 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 10:07:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:55 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:55 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:55 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:55 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:55.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:56 compute-1 ceph-mon[79167]: pgmap v651: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 10:07:56 compute-1 sudo[236537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:07:56 compute-1 sudo[236537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:07:56 compute-1 sudo[236537]: pam_unix(sudo:session): session closed for user root
Oct 10 10:07:56 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:56 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe80095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:56 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:56 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8002da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:07:57 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:57 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:57 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:57.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:57 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #40. Immutable memtables: 0.
Oct 10 10:07:57 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:07:57.549539) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 10:07:57 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 40
Oct 10 10:07:57 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090877549589, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 2357, "num_deletes": 251, "total_data_size": 6246893, "memory_usage": 6341456, "flush_reason": "Manual Compaction"}
Oct 10 10:07:57 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #41: started
Oct 10 10:07:57 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090877574398, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 41, "file_size": 4069084, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20749, "largest_seqno": 23101, "table_properties": {"data_size": 4059605, "index_size": 5973, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19602, "raw_average_key_size": 20, "raw_value_size": 4040653, "raw_average_value_size": 4165, "num_data_blocks": 262, "num_entries": 970, "num_filter_entries": 970, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760090666, "oldest_key_time": 1760090666, "file_creation_time": 1760090877, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:07:57 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 24906 microseconds, and 17074 cpu microseconds.
Oct 10 10:07:57 compute-1 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:07:57 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:07:57.574448) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #41: 4069084 bytes OK
Oct 10 10:07:57 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:07:57.574470) [db/memtable_list.cc:519] [default] Level-0 commit table #41 started
Oct 10 10:07:57 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:07:57.575934) [db/memtable_list.cc:722] [default] Level-0 commit table #41: memtable #1 done
Oct 10 10:07:57 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:07:57.575960) EVENT_LOG_v1 {"time_micros": 1760090877575952, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 10:07:57 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:07:57.575985) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 10:07:57 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 6236419, prev total WAL file size 6272944, number of live WAL files 2.
Oct 10 10:07:57 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000037.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:07:57 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:07:57.578839) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Oct 10 10:07:57 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 10:07:57 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [41(3973KB)], [39(12MB)]
Oct 10 10:07:57 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090877578918, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [41], "files_L6": [39], "score": -1, "input_data_size": 16934713, "oldest_snapshot_seqno": -1}
Oct 10 10:07:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:57 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8002da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:57 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #42: 5421 keys, 14721572 bytes, temperature: kUnknown
Oct 10 10:07:57 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090877656117, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 42, "file_size": 14721572, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14683316, "index_size": 23618, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13573, "raw_key_size": 136712, "raw_average_key_size": 25, "raw_value_size": 14583102, "raw_average_value_size": 2690, "num_data_blocks": 976, "num_entries": 5421, "num_filter_entries": 5421, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089522, "oldest_key_time": 0, "file_creation_time": 1760090877, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 42, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:07:57 compute-1 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:07:57 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:07:57.656613) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 14721572 bytes
Oct 10 10:07:57 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:07:57.658076) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 219.0 rd, 190.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 12.3 +0.0 blob) out(14.0 +0.0 blob), read-write-amplify(7.8) write-amplify(3.6) OK, records in: 5941, records dropped: 520 output_compression: NoCompression
Oct 10 10:07:57 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:07:57.658106) EVENT_LOG_v1 {"time_micros": 1760090877658093, "job": 22, "event": "compaction_finished", "compaction_time_micros": 77344, "compaction_time_cpu_micros": 53724, "output_level": 6, "num_output_files": 1, "total_output_size": 14721572, "num_input_records": 5941, "num_output_records": 5421, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 10:07:57 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:07:57 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090877659542, "job": 22, "event": "table_file_deletion", "file_number": 41}
Oct 10 10:07:57 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000039.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:07:57 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090877664351, "job": 22, "event": "table_file_deletion", "file_number": 39}
Oct 10 10:07:57 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:07:57.578683) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:07:57 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:07:57.664490) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:07:57 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:07:57.664498) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:07:57 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:07:57.664501) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:07:57 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:07:57.664504) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:07:57 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:07:57.664506) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:07:57 compute-1 sudo[236563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:07:57 compute-1 sudo[236563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:07:57 compute-1 sudo[236563]: pam_unix(sudo:session): session closed for user root
Oct 10 10:07:57 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:57 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:57 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:57.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:58 compute-1 ceph-mon[79167]: pgmap v652: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 10:07:58 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:07:58 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:07:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:58 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:58 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe80095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:59 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:59 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:59 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:07:59.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:07:59 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:07:59 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe80095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:07:59 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:07:59 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:07:59 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:07:59.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:00 compute-1 ceph-mon[79167]: pgmap v653: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 10:08:00 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:00 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:00 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:00 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:00 compute-1 podman[236589]: 2025-10-10 10:08:00.962476634 +0000 UTC m=+0.066916374 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 10 10:08:01 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:01 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:08:01 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:01.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:08:01 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:01 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe80095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:01 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:01 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:01 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:01.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:02 compute-1 ceph-mon[79167]: pgmap v654: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Oct 10 10:08:02 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:08:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:08:02 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:02 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:02 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:02 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:03 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:03 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:03 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:03.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:03 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:03 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:03 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:03 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:03.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:04 compute-1 ceph-mon[79167]: pgmap v655: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
Oct 10 10:08:04 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:04 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe80095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:04 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:04 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:05 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:05 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:05 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:05.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:05 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:05 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:05 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:05 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:05 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:05.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:06 compute-1 ceph-mon[79167]: pgmap v656: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:06 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:06 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:06 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:06 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe80095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:08:07 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:07 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:07 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:07.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:07 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:07 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:07 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:08:07 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:07.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:08:08 compute-1 ceph-mon[79167]: pgmap v657: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:08 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:08 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:09 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:09 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:09 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:09.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe80095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:09 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:09 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:08:09 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:09.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:08:10 compute-1 ceph-mon[79167]: pgmap v658: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:10 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:10 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:10 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:10 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:11 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:11 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:08:11 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:11.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:08:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:11 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:11 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:11 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:11 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:11.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:12 compute-1 ceph-mon[79167]: pgmap v659: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:08:12 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:12 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe80095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:12 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:12 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:13 compute-1 ceph-mon[79167]: pgmap v660: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 10:08:13 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:13 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:13 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:13.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:13 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc0002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:13 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:13 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:08:13 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:13.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:08:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:14 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:14 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe80095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:14 compute-1 podman[236619]: 2025-10-10 10:08:14.968192578 +0000 UTC m=+0.073146506 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:08:14 compute-1 podman[236620]: 2025-10-10 10:08:14.999777424 +0000 UTC m=+0.087981243 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=multipathd)
Oct 10 10:08:15 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:15 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:15 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:15.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:15 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:15 compute-1 ceph-mon[79167]: pgmap v661: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:15 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:15 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:15 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:15.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:16 compute-1 sudo[236661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:08:16 compute-1 sudo[236661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:08:16 compute-1 sudo[236661]: pam_unix(sudo:session): session closed for user root
Oct 10 10:08:16 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:16 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc0002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:16 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:16 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0001fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:08:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:08:17 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:17 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:08:17 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:17.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:08:17 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:17 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe80095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:17 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:17 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:08:17 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:17.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:08:17 compute-1 ceph-mon[79167]: pgmap v662: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:18 compute-1 podman[236687]: 2025-10-10 10:08:18.014277414 +0000 UTC m=+0.106204131 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 10 10:08:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:18 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:18 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc0002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:19 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:19 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:08:19 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:19.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:08:19 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:08:19.604 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:dc:6a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:2f:dd:4e:d8:41'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:08:19 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:08:19.606 141156 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 10 10:08:19 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:08:19.607 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ee0899c1-415d-4aa8-abe8-1240b4e8bf2c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:08:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:19 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0001fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:19 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:19 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:19 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:19.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:19 compute-1 ceph-mon[79167]: pgmap v663: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:20 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:20 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe80095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:20 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:20 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:21 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:21 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:21 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:21.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:21 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc0002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:21 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:21 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:21 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:21.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:21 compute-1 ceph-mon[79167]: pgmap v664: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:22 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:08:22 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:22 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0002a30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:22 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:22 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe80095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:23 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:23 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:08:23 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:23.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:08:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:23 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:23 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:23 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:23 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:23.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:23 compute-1 ceph-mon[79167]: pgmap v665: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 10:08:24 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:24 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc0002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:24 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:24 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0002a30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:25 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:25 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:25 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:25.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:25 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:25 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe80095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:25 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:25 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:25 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:25.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:25 compute-1 ceph-mon[79167]: pgmap v666: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:26 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:26 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:26 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:26 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc00036c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:26 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/954143266' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:08:26 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/954143266' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:08:27 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:08:27 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:27 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:08:27 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:27.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:08:27 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:27 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0002a30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:27 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:27 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:08:27 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:27.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:08:27 compute-1 ceph-mon[79167]: pgmap v667: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:28 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe80095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:28 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:29 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:29 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:29 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:29.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:29 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc00036c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:29 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:29 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:08:29 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:29.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:08:30 compute-1 ceph-mon[79167]: pgmap v668: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:30 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:30 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0003b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:30 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:30 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe80095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:31 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:31 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:31 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:31.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:31 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:31 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:31 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:31 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:31.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:31 compute-1 podman[236720]: 2025-10-10 10:08:31.977308616 +0000 UTC m=+0.078061230 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 10 10:08:32 compute-1 ceph-mon[79167]: pgmap v669: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:32 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:08:32 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:08:32 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:32 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc00036c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:32 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:32 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0003b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:33 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:33 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:08:33 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:33.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:08:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:33 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe80095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:33 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:33 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:33 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:33.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:34 compute-1 ceph-mon[79167]: pgmap v670: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 10:08:34 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:34 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:34 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:34 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc00036c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:35 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:35 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:35 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:35.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:35 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:35 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0003b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:35 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:35 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:35 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:35.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:36 compute-1 ceph-mon[79167]: pgmap v671: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:36 compute-1 sudo[236741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:08:36 compute-1 sudo[236741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:08:36 compute-1 sudo[236741]: pam_unix(sudo:session): session closed for user root
Oct 10 10:08:36 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:36 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe80095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:36 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:36 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:08:37 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:37 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:37 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:37.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:37 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc00036c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:37 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:37 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:08:37 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:37.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:08:38 compute-1 ceph-mon[79167]: pgmap v672: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:38 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0003b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:38 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe80095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:39 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:39 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:39 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:39.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:39 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:39 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:39 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:08:39 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:39.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:08:40 compute-1 ceph-mon[79167]: pgmap v673: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:40 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:40 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc00036c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:40 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:40 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0003b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:41 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:41 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:08:41 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:41.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:08:41 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:41 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe80095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:41 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:41 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:41 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:41.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:08:42.202 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:08:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:08:42.202 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:08:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:08:42.202 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:08:42 compute-1 ceph-mon[79167]: pgmap v674: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:42 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:08:42 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:42 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd8003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:42 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:42 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc00036c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:43 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:43 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:43 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:43.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:43 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0003b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:43 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:43 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:43 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:43.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:44 compute-1 ceph-mon[79167]: pgmap v675: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 10:08:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:44 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe80095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:44 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc40014d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:45 compute-1 ceph-mon[79167]: pgmap v676: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:45 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:45 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:08:45 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:45.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:08:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:45 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc40014d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:45 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:45 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:45 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:45.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:45 compute-1 podman[236773]: 2025-10-10 10:08:45.967146535 +0000 UTC m=+0.067249257 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 10 10:08:45 compute-1 podman[236774]: 2025-10-10 10:08:45.966795816 +0000 UTC m=+0.066900538 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 10 10:08:46 compute-1 nova_compute[235132]: 2025-10-10 10:08:46.045 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:08:46 compute-1 nova_compute[235132]: 2025-10-10 10:08:46.046 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:08:46 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:08:46 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:46 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0003b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:46 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:46 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe80095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:47 compute-1 nova_compute[235132]: 2025-10-10 10:08:47.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:08:47 compute-1 nova_compute[235132]: 2025-10-10 10:08:47.045 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:08:47 compute-1 nova_compute[235132]: 2025-10-10 10:08:47.073 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:08:47 compute-1 nova_compute[235132]: 2025-10-10 10:08:47.073 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:08:47 compute-1 nova_compute[235132]: 2025-10-10 10:08:47.074 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:08:47 compute-1 nova_compute[235132]: 2025-10-10 10:08:47.074 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:08:47 compute-1 nova_compute[235132]: 2025-10-10 10:08:47.074 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:08:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:08:47 compute-1 ceph-mon[79167]: pgmap v677: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:08:47 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3922966226' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:08:47 compute-1 nova_compute[235132]: 2025-10-10 10:08:47.556 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:08:47 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:47 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:47 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:47.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:47 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:47 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:47 compute-1 nova_compute[235132]: 2025-10-10 10:08:47.719 2 WARNING nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:08:47 compute-1 nova_compute[235132]: 2025-10-10 10:08:47.721 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5231MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:08:47 compute-1 nova_compute[235132]: 2025-10-10 10:08:47.722 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:08:47 compute-1 nova_compute[235132]: 2025-10-10 10:08:47.722 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:08:47 compute-1 nova_compute[235132]: 2025-10-10 10:08:47.830 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:08:47 compute-1 nova_compute[235132]: 2025-10-10 10:08:47.831 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:08:47 compute-1 nova_compute[235132]: 2025-10-10 10:08:47.848 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:08:47 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:47 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:47 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:47.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:48 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:08:48 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2549355807' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:08:48 compute-1 nova_compute[235132]: 2025-10-10 10:08:48.366 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:08:48 compute-1 nova_compute[235132]: 2025-10-10 10:08:48.372 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:08:48 compute-1 nova_compute[235132]: 2025-10-10 10:08:48.387 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:08:48 compute-1 nova_compute[235132]: 2025-10-10 10:08:48.388 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:08:48 compute-1 nova_compute[235132]: 2025-10-10 10:08:48.388 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:08:48 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3922966226' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:08:48 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1968454522' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:08:48 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/2549355807' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:08:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:48 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc40023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:48 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0003b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:49 compute-1 podman[236858]: 2025-10-10 10:08:49.029362796 +0000 UTC m=+0.133250140 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Oct 10 10:08:49 compute-1 nova_compute[235132]: 2025-10-10 10:08:49.388 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:08:49 compute-1 nova_compute[235132]: 2025-10-10 10:08:49.388 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 10:08:49 compute-1 nova_compute[235132]: 2025-10-10 10:08:49.389 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 10:08:49 compute-1 nova_compute[235132]: 2025-10-10 10:08:49.403 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 10 10:08:49 compute-1 nova_compute[235132]: 2025-10-10 10:08:49.404 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:08:49 compute-1 nova_compute[235132]: 2025-10-10 10:08:49.405 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:08:49 compute-1 nova_compute[235132]: 2025-10-10 10:08:49.406 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:08:49 compute-1 nova_compute[235132]: 2025-10-10 10:08:49.406 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:08:49 compute-1 nova_compute[235132]: 2025-10-10 10:08:49.406 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 10:08:49 compute-1 ceph-mon[79167]: pgmap v678: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:49 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/4155216318' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:08:49 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2040447484' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:08:49 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:49 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:49 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:49.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:49 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0003b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:49 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:49 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:49 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:49.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:50 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1585741613' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:08:50 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:50 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:50 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:50 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc40023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:51 compute-1 ceph-mon[79167]: pgmap v679: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:08:51 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:51 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:51 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:51.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:51 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc40023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:51 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:51 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:08:51 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:51.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:08:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:08:52 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:52 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe80095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:52 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:52 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:53 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:53 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:08:53 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:53.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:08:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:53 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0003b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:53 compute-1 ceph-mon[79167]: pgmap v680: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 10:08:53 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:53 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:53 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:53.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:54 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:54 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4003b40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:54 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:54 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe800a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:55 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:55 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:55 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:55.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:55 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:55 compute-1 ceph-mon[79167]: pgmap v681: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:08:55 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:55 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:55 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:55.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:56 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:56 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:56 compute-1 sudo[236888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:08:56 compute-1 sudo[236888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:08:56 compute-1 sudo[236888]: pam_unix(sudo:session): session closed for user root
Oct 10 10:08:56 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:56 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4003b40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:08:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100857 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 10:08:57 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:57 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:08:57 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:57.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:08:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:57 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe800a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:57 compute-1 ceph-mon[79167]: pgmap v682: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:08:57 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:57 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:08:57 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:57.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:08:57 compute-1 sudo[236914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:08:57 compute-1 sudo[236914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:08:58 compute-1 sudo[236914]: pam_unix(sudo:session): session closed for user root
Oct 10 10:08:58 compute-1 sudo[236939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 10 10:08:58 compute-1 sudo[236939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:08:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:58 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0003b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:58 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8003770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:58 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 10 10:08:58 compute-1 podman[237040]: 2025-10-10 10:08:58.920939184 +0000 UTC m=+0.110674121 container exec 8a2c16c69263d410aefee28722d7403147210c67386f896d09115878d61e6ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:08:59 compute-1 podman[237040]: 2025-10-10 10:08:59.042901308 +0000 UTC m=+0.232636225 container exec_died 8a2c16c69263d410aefee28722d7403147210c67386f896d09115878d61e6ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Oct 10 10:08:59 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:59 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:08:59 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:08:59.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:08:59 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:08:59 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe800a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:08:59 compute-1 podman[237163]: 2025-10-10 10:08:59.740106696 +0000 UTC m=+0.070252608 container exec db44b00524aaf85460d5de4878ad14e99cea939212a287f5217a7de1b31f7321 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 10:08:59 compute-1 podman[237163]: 2025-10-10 10:08:59.747771844 +0000 UTC m=+0.077917736 container exec_died db44b00524aaf85460d5de4878ad14e99cea939212a287f5217a7de1b31f7321 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 10:08:59 compute-1 ceph-mon[79167]: pgmap v683: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:08:59 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:08:59 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:08:59 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:08:59.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:09:00 compute-1 podman[237253]: 2025-10-10 10:09:00.148177347 +0000 UTC m=+0.077369130 container exec 5bbefa4ea748a644be2ecf190044e93212464e29f8f68de8c12c0152f38f884e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 10 10:09:00 compute-1 podman[237253]: 2025-10-10 10:09:00.162963527 +0000 UTC m=+0.092155330 container exec_died 5bbefa4ea748a644be2ecf190044e93212464e29f8f68de8c12c0152f38f884e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325)
Oct 10 10:09:00 compute-1 podman[237319]: 2025-10-10 10:09:00.471379216 +0000 UTC m=+0.080791142 container exec 7ce1be4884f313700b9f3fe6c66e14b163c4d6bf31089a45b751a6389f8be562 (image=quay.io/ceph/haproxy:2.3, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw)
Oct 10 10:09:00 compute-1 podman[237319]: 2025-10-10 10:09:00.506881025 +0000 UTC m=+0.116292891 container exec_died 7ce1be4884f313700b9f3fe6c66e14b163c4d6bf31089a45b751a6389f8be562 (image=quay.io/ceph/haproxy:2.3, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw)
Oct 10 10:09:00 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:00 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4003b40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:00 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:00 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe800a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:00 compute-1 podman[237388]: 2025-10-10 10:09:00.830178997 +0000 UTC m=+0.078120501 container exec 8efe4c511a9ca69c4fbe77af128c823fd37bedd1e530d3bfb1fc1044e25a5138 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, name=keepalived, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, io.buildah.version=1.28.2)
Oct 10 10:09:00 compute-1 podman[237388]: 2025-10-10 10:09:00.907642968 +0000 UTC m=+0.155584422 container exec_died 8efe4c511a9ca69c4fbe77af128c823fd37bedd1e530d3bfb1fc1044e25a5138 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp, description=keepalived for Ceph, io.buildah.version=1.28.2, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, name=keepalived, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, version=2.2.4)
Oct 10 10:09:00 compute-1 sudo[236939]: pam_unix(sudo:session): session closed for user root
Oct 10 10:09:01 compute-1 sudo[237422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:09:01 compute-1 sudo[237422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:09:01 compute-1 sudo[237422]: pam_unix(sudo:session): session closed for user root
Oct 10 10:09:01 compute-1 sudo[237447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:09:01 compute-1 sudo[237447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:09:01 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:09:01 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:09:01 compute-1 ceph-mon[79167]: pgmap v684: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:09:01 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:09:01 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:09:01 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 10 10:09:01 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:09:01 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:01 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:09:01 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:01.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:09:01 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:01 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0003b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:01 compute-1 sudo[237447]: pam_unix(sudo:session): session closed for user root
Oct 10 10:09:01 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:01 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:09:01 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:01.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:09:02 compute-1 PackageKit[168620]: daemon quit
Oct 10 10:09:02 compute-1 systemd[1]: packagekit.service: Deactivated successfully.
Oct 10 10:09:02 compute-1 podman[237504]: 2025-10-10 10:09:02.363868107 +0000 UTC m=+0.080832644 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 10:09:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:09:02 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 10 10:09:02 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:09:02 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:09:02 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:09:02 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:09:02 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:09:02 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:09:02 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:09:02 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:02 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8003770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:02 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:02 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4003b40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:03 compute-1 ceph-mon[79167]: pgmap v685: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 10 10:09:03 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:03 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:09:03 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:03.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:09:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:03 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4003b40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:03 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:03 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:03 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:03.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:04 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:04 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0003b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:04 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:04 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe800a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:05 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:05 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:09:05 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:05.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:09:05 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:05 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8003770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:05 compute-1 ceph-mon[79167]: pgmap v686: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:09:05 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:05 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:05 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:05.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:06 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:06 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4003b40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:06 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:06 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:09:06 compute-1 sudo[237525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:09:06 compute-1 sudo[237525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:09:06 compute-1 sudo[237525]: pam_unix(sudo:session): session closed for user root
Oct 10 10:09:06 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:06 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0003b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:09:07 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:07 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:09:07 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:07.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:09:07 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:09:07 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:09:07 compute-1 ceph-mon[79167]: pgmap v687: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:09:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:07 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe800a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:07 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:07 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:09:07 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:07.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:09:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:08 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8003770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:08 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4003b40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:09 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:09 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:09 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:09.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0003b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:09:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:09:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:09 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:09:09 compute-1 ceph-mon[79167]: pgmap v688: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 10:09:09 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:09 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:09 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:09.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:10 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:10 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe800a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:10 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:10 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8003770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:11 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:11 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:09:11 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:11.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:09:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:11 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4003b40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:11 compute-1 ceph-mon[79167]: pgmap v689: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 10:09:11 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:11 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:11 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:11.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:09:12 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:12 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0003b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:12 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:12 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 10:09:12 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:12 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe800a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:13 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:13 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:13 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:13.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:13 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effb8003770 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:13 compute-1 ceph-mon[79167]: pgmap v690: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:09:13 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:13 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:13 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:13.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:14 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc4003b40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:14 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0003b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:15 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:15 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:15 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:15.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:15 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe800a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:15 compute-1 ceph-mon[79167]: pgmap v691: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:09:15 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:15 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:09:15 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:15.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:09:16 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:16 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc0001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:16 compute-1 sudo[237557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:09:16 compute-1 sudo[237557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:09:16 compute-1 sudo[237557]: pam_unix(sudo:session): session closed for user root
Oct 10 10:09:16 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:16 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd80027e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:16 compute-1 podman[237582]: 2025-10-10 10:09:16.882304905 +0000 UTC m=+0.091450692 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd)
Oct 10 10:09:16 compute-1 podman[237581]: 2025-10-10 10:09:16.887795723 +0000 UTC m=+0.096230841 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid)
Oct 10 10:09:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:09:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:09:17 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:17 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:09:17 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:17.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:09:17 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:17 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd0003b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:17 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:17 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:09:17 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:17.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:09:17 compute-1 ceph-mon[79167]: pgmap v692: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:09:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:18 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effe800a640 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:18 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effc0001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100919 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 10:09:19 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:19 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:19 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:19.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[236102]: 10/10/2025 10:09:19 : epoch 68e8dac1 : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effd80027e0 fd 39 proxy ignored for local
Oct 10 10:09:19 compute-1 kernel: ganesha.nfsd[237556]: segfault at 50 ip 00007f009720d32e sp 00007f005bffe210 error 4 in libntirpc.so.5.8[7f00971f2000+2c000] likely on CPU 5 (core 0, socket 5)
Oct 10 10:09:19 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 10 10:09:19 compute-1 systemd[1]: Started Process Core Dump (PID 237624/UID 0).
Oct 10 10:09:19 compute-1 podman[237625]: 2025-10-10 10:09:19.85027628 +0000 UTC m=+0.126157928 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 10 10:09:19 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:19 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:19 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:19.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:20 compute-1 ceph-mon[79167]: pgmap v693: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1023 B/s wr, 4 op/s
Oct 10 10:09:20 compute-1 systemd-coredump[237626]: Process 236106 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 63:
                                                    #0  0x00007f009720d32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Oct 10 10:09:21 compute-1 systemd[1]: systemd-coredump@10-237624-0.service: Deactivated successfully.
Oct 10 10:09:21 compute-1 systemd[1]: systemd-coredump@10-237624-0.service: Consumed 1.290s CPU time.
Oct 10 10:09:21 compute-1 podman[237656]: 2025-10-10 10:09:21.140051263 +0000 UTC m=+0.029485167 container died 5bbefa4ea748a644be2ecf190044e93212464e29f8f68de8c12c0152f38f884e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:09:21 compute-1 systemd[1]: var-lib-containers-storage-overlay-52b5db3a1abe9ac35ae244b86c53e77450ea7623048e488d415454372713c949-merged.mount: Deactivated successfully.
Oct 10 10:09:21 compute-1 podman[237656]: 2025-10-10 10:09:21.18624493 +0000 UTC m=+0.075678814 container remove 5bbefa4ea748a644be2ecf190044e93212464e29f8f68de8c12c0152f38f884e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:09:21 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Main process exited, code=exited, status=139/n/a
Oct 10 10:09:21 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Failed with result 'exit-code'.
Oct 10 10:09:21 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 1.924s CPU time.
Oct 10 10:09:21 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:21 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:09:21 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:21.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:09:21 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:21 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:09:21 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:21.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:09:22 compute-1 ceph-mon[79167]: pgmap v694: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Oct 10 10:09:22 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:09:23 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:23 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:23 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:23.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:23 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:23 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:09:23 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:23.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:09:24 compute-1 ceph-mon[79167]: pgmap v695: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Oct 10 10:09:25 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:25 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:25 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:25.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:25 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100925 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 10:09:25 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:25 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:25 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:25.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:26 compute-1 ceph-mon[79167]: pgmap v696: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:09:27 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/3145259585' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:09:27 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/3145259585' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:09:27 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:09:27 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:27 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:27 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:27.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:27 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:27 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:09:27 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:27.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:09:28 compute-1 ceph-mon[79167]: pgmap v697: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:09:29 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:29 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:29 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:29.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:29 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:29 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:09:29 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:29.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:09:30 compute-1 ceph-mon[79167]: pgmap v698: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:09:31 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Scheduled restart job, restart counter is at 11.
Oct 10 10:09:31 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 10:09:31 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 1.924s CPU time.
Oct 10 10:09:31 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 10:09:31 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:31 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:31 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:31.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:31 compute-1 podman[237753]: 2025-10-10 10:09:31.942849119 +0000 UTC m=+0.056926959 container create bb8f660440db20e3dee90d5ebe6f94cf8960586a923c3f7c6dcbc88c1999fc81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:09:32 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:32 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:32 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:31.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:32 compute-1 podman[237753]: 2025-10-10 10:09:31.914083652 +0000 UTC m=+0.028161532 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:09:32 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22e0b4c544a8cdc3e9d0224c9c86fb6c5d3c39a448c6f94c436a9a9058981680/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 10 10:09:32 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22e0b4c544a8cdc3e9d0224c9c86fb6c5d3c39a448c6f94c436a9a9058981680/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:09:32 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22e0b4c544a8cdc3e9d0224c9c86fb6c5d3c39a448c6f94c436a9a9058981680/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:09:32 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22e0b4c544a8cdc3e9d0224c9c86fb6c5d3c39a448c6f94c436a9a9058981680/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.mssvzx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:09:32 compute-1 podman[237753]: 2025-10-10 10:09:32.03395986 +0000 UTC m=+0.148037770 container init bb8f660440db20e3dee90d5ebe6f94cf8960586a923c3f7c6dcbc88c1999fc81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 10 10:09:32 compute-1 podman[237753]: 2025-10-10 10:09:32.0461651 +0000 UTC m=+0.160242940 container start bb8f660440db20e3dee90d5ebe6f94cf8960586a923c3f7c6dcbc88c1999fc81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:09:32 compute-1 bash[237753]: bb8f660440db20e3dee90d5ebe6f94cf8960586a923c3f7c6dcbc88c1999fc81
Oct 10 10:09:32 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:32 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 10 10:09:32 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:32 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 10 10:09:32 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 10:09:32 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:32 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 10 10:09:32 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:32 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 10 10:09:32 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:32 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 10 10:09:32 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:32 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 10 10:09:32 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:32 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 10 10:09:32 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:32 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:09:32 compute-1 ceph-mon[79167]: pgmap v699: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 10 10:09:32 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:09:32 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:09:32 compute-1 podman[237811]: 2025-10-10 10:09:32.990903914 +0000 UTC m=+0.090482766 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 10 10:09:33 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:33 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:33 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:33.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:34 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:34 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:34 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:34.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:34 compute-1 ceph-mon[79167]: pgmap v700: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:09:35 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:35 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:35 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:35.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:36 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:36 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:36 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:36.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:36 compute-1 ceph-mon[79167]: pgmap v701: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:09:36 compute-1 sudo[237832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:09:36 compute-1 sudo[237832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:09:36 compute-1 sudo[237832]: pam_unix(sudo:session): session closed for user root
Oct 10 10:09:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:09:37 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:37 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:37 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:37.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:38 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:38 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:38 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:38.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:38 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:09:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:38 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:09:38 compute-1 ceph-mon[79167]: pgmap v702: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:09:39 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:39 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:39 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:39.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:40 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:40 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:40 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:40.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:40 compute-1 ceph-mon[79167]: pgmap v703: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 10:09:41 compute-1 ceph-mon[79167]: pgmap v704: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Oct 10 10:09:41 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:41 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:41 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:41.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:42 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:42 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:42 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:42.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:09:42.203 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:09:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:09:42.203 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:09:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:09:42.203 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:09:42 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:09:43 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:43 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:43 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:43.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:43 compute-1 ceph-mon[79167]: pgmap v705: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 10:09:44 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:44 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:09:44 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:44.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:09:44 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #43. Immutable memtables: 0.
Oct 10 10:09:44 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:09:44.185879) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 10:09:44 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 43
Oct 10 10:09:44 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090984185930, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1297, "num_deletes": 250, "total_data_size": 3195446, "memory_usage": 3259680, "flush_reason": "Manual Compaction"}
Oct 10 10:09:44 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #44: started
Oct 10 10:09:44 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090984198670, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 44, "file_size": 1331552, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23106, "largest_seqno": 24398, "table_properties": {"data_size": 1327081, "index_size": 1995, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11513, "raw_average_key_size": 20, "raw_value_size": 1317447, "raw_average_value_size": 2340, "num_data_blocks": 86, "num_entries": 563, "num_filter_entries": 563, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760090877, "oldest_key_time": 1760090877, "file_creation_time": 1760090984, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:09:44 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 12931 microseconds, and 7547 cpu microseconds.
Oct 10 10:09:44 compute-1 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:09:44 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:09:44.198805) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #44: 1331552 bytes OK
Oct 10 10:09:44 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:09:44.198858) [db/memtable_list.cc:519] [default] Level-0 commit table #44 started
Oct 10 10:09:44 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:09:44.199884) [db/memtable_list.cc:722] [default] Level-0 commit table #44: memtable #1 done
Oct 10 10:09:44 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:09:44.199902) EVENT_LOG_v1 {"time_micros": 1760090984199895, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 10:09:44 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:09:44.199923) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 10:09:44 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 3189273, prev total WAL file size 3189273, number of live WAL files 2.
Oct 10 10:09:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 10:09:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 10 10:09:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 10 10:09:44 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:09:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 10 10:09:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 10 10:09:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 10 10:09:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 10 10:09:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:09:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:09:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:09:44 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:09:44.201269) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353033' seq:72057594037927935, type:22 .. '6D67727374617400373534' seq:0, type:0; will stop at (end)
Oct 10 10:09:44 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 10:09:44 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [44(1300KB)], [42(14MB)]
Oct 10 10:09:44 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090984201302, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [44], "files_L6": [42], "score": -1, "input_data_size": 16053124, "oldest_snapshot_seqno": -1}
Oct 10 10:09:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 10 10:09:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:09:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 10 10:09:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 10 10:09:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 10 10:09:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 10 10:09:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 10 10:09:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 10 10:09:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 10 10:09:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 10 10:09:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 10 10:09:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 10 10:09:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 10 10:09:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 10 10:09:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 10:09:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 10 10:09:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 10:09:44 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #45: 5513 keys, 12707796 bytes, temperature: kUnknown
Oct 10 10:09:44 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090984270652, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 45, "file_size": 12707796, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12672046, "index_size": 20856, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13829, "raw_key_size": 138924, "raw_average_key_size": 25, "raw_value_size": 12573408, "raw_average_value_size": 2280, "num_data_blocks": 855, "num_entries": 5513, "num_filter_entries": 5513, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089522, "oldest_key_time": 0, "file_creation_time": 1760090984, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:09:44 compute-1 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:09:44 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:09:44.271569) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 12707796 bytes
Oct 10 10:09:44 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:09:44.273427) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 229.4 rd, 181.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 14.0 +0.0 blob) out(12.1 +0.0 blob), read-write-amplify(21.6) write-amplify(9.5) OK, records in: 5984, records dropped: 471 output_compression: NoCompression
Oct 10 10:09:44 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:09:44.273467) EVENT_LOG_v1 {"time_micros": 1760090984273453, "job": 24, "event": "compaction_finished", "compaction_time_micros": 69980, "compaction_time_cpu_micros": 28714, "output_level": 6, "num_output_files": 1, "total_output_size": 12707796, "num_input_records": 5984, "num_output_records": 5513, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 10:09:44 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:09:44 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090984274075, "job": 24, "event": "table_file_deletion", "file_number": 44}
Oct 10 10:09:44 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000042.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:09:44 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760090984277104, "job": 24, "event": "table_file_deletion", "file_number": 42}
Oct 10 10:09:44 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:09:44.201215) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:09:44 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:09:44.277169) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:09:44 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:09:44.277175) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:09:44 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:09:44.277177) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:09:44 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:09:44.277179) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:09:44 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:09:44.277182) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:09:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a14000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:44 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a14000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:45 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:45 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ec000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:45 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:45 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:45.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:46 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:46 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:46 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:46.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:46 compute-1 ceph-mon[79167]: pgmap v706: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:09:46 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:46 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:09:46 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - - [10/Oct/2025:10:09:46.574 +0000] "GET /swift/info HTTP/1.1" 200 539 - "python-urllib3/1.26.5" - latency=0.001000026s
Oct 10 10:09:46 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:46 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a10001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:46 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:46 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a10001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:47 compute-1 nova_compute[235132]: 2025-10-10 10:09:47.047 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:09:47 compute-1 nova_compute[235132]: 2025-10-10 10:09:47.048 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:09:47 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:09:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:09:47 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100947 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 10:09:47 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:47 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a14002070 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:47 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:47 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:09:47 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:47.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:09:47 compute-1 podman[237878]: 2025-10-10 10:09:47.997148433 +0000 UTC m=+0.085482300 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, org.label-schema.schema-version=1.0)
Oct 10 10:09:47 compute-1 podman[237879]: 2025-10-10 10:09:47.997184734 +0000 UTC m=+0.090198348 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 10 10:09:48 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:48 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:48 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:48.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:48 compute-1 nova_compute[235132]: 2025-10-10 10:09:48.038 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:09:48 compute-1 nova_compute[235132]: 2025-10-10 10:09:48.066 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:09:48 compute-1 nova_compute[235132]: 2025-10-10 10:09:48.067 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:09:48 compute-1 nova_compute[235132]: 2025-10-10 10:09:48.100 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:09:48 compute-1 nova_compute[235132]: 2025-10-10 10:09:48.101 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:09:48 compute-1 nova_compute[235132]: 2025-10-10 10:09:48.102 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:09:48 compute-1 nova_compute[235132]: 2025-10-10 10:09:48.102 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:09:48 compute-1 nova_compute[235132]: 2025-10-10 10:09:48.103 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:09:48 compute-1 ceph-mon[79167]: pgmap v707: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:09:48 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/177719747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:09:48 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:09:48 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2615600125' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:09:48 compute-1 nova_compute[235132]: 2025-10-10 10:09:48.588 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:09:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:48 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89ec0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:48 compute-1 nova_compute[235132]: 2025-10-10 10:09:48.779 2 WARNING nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:09:48 compute-1 nova_compute[235132]: 2025-10-10 10:09:48.781 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=5227MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:09:48 compute-1 nova_compute[235132]: 2025-10-10 10:09:48.782 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:09:48 compute-1 nova_compute[235132]: 2025-10-10 10:09:48.782 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:09:48 compute-1 nova_compute[235132]: 2025-10-10 10:09:48.866 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:09:48 compute-1 nova_compute[235132]: 2025-10-10 10:09:48.867 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:09:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:48 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f89f0000ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:09:48 compute-1 nova_compute[235132]: 2025-10-10 10:09:48.881 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:09:49 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/2615600125' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:09:49 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2916280922' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:09:49 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:09:49 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4222935933' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:09:49 compute-1 nova_compute[235132]: 2025-10-10 10:09:49.346 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:09:49 compute-1 nova_compute[235132]: 2025-10-10 10:09:49.354 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:09:49 compute-1 nova_compute[235132]: 2025-10-10 10:09:49.373 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:09:49 compute-1 nova_compute[235132]: 2025-10-10 10:09:49.376 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:09:49 compute-1 nova_compute[235132]: 2025-10-10 10:09:49.376 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:09:49 compute-1 kernel: ganesha.nfsd[237863]: segfault at 50 ip 00007f8abd3e232e sp 00007f8a7fffe210 error 4 in libntirpc.so.5.8[7f8abd3c7000+2c000] likely on CPU 7 (core 0, socket 7)
Oct 10 10:09:49 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 10 10:09:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[237769]: 10/10/2025 10:09:49 : epoch 68e8db5c : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8a100029b0 fd 39 proxy ignored for local
Oct 10 10:09:49 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:49 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:09:49 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:49.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:09:49 compute-1 systemd[1]: Started Process Core Dump (PID 237964/UID 0).
Oct 10 10:09:50 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:50 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:09:50 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:50.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:09:50 compute-1 podman[237966]: 2025-10-10 10:09:50.038228116 +0000 UTC m=+0.135362207 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:09:50 compute-1 ceph-mon[79167]: pgmap v708: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:09:50 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/4222935933' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:09:50 compute-1 nova_compute[235132]: 2025-10-10 10:09:50.354 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:09:50 compute-1 nova_compute[235132]: 2025-10-10 10:09:50.354 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 10:09:50 compute-1 nova_compute[235132]: 2025-10-10 10:09:50.355 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 10:09:50 compute-1 nova_compute[235132]: 2025-10-10 10:09:50.368 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 10 10:09:50 compute-1 nova_compute[235132]: 2025-10-10 10:09:50.369 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:09:50 compute-1 nova_compute[235132]: 2025-10-10 10:09:50.370 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:09:50 compute-1 nova_compute[235132]: 2025-10-10 10:09:50.370 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:09:50 compute-1 nova_compute[235132]: 2025-10-10 10:09:50.370 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:09:50 compute-1 nova_compute[235132]: 2025-10-10 10:09:50.370 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 10:09:50 compute-1 systemd-coredump[237965]: Process 237773 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 42:
                                                    #0  0x00007f8abd3e232e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Oct 10 10:09:51 compute-1 systemd[1]: systemd-coredump@11-237964-0.service: Deactivated successfully.
Oct 10 10:09:51 compute-1 systemd[1]: systemd-coredump@11-237964-0.service: Consumed 1.336s CPU time.
Oct 10 10:09:51 compute-1 podman[237996]: 2025-10-10 10:09:51.169253581 +0000 UTC m=+0.026298641 container died bb8f660440db20e3dee90d5ebe6f94cf8960586a923c3f7c6dcbc88c1999fc81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:09:51 compute-1 systemd[1]: var-lib-containers-storage-overlay-22e0b4c544a8cdc3e9d0224c9c86fb6c5d3c39a448c6f94c436a9a9058981680-merged.mount: Deactivated successfully.
Oct 10 10:09:51 compute-1 podman[237996]: 2025-10-10 10:09:51.22550436 +0000 UTC m=+0.082549440 container remove bb8f660440db20e3dee90d5ebe6f94cf8960586a923c3f7c6dcbc88c1999fc81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Oct 10 10:09:51 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Main process exited, code=exited, status=139/n/a
Oct 10 10:09:51 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2285751282' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:09:51 compute-1 ceph-mon[79167]: pgmap v709: 353 pgs: 353 active+clean; 458 KiB data, 157 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 2 op/s
Oct 10 10:09:51 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2966347030' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:09:51 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e144 e144: 3 total, 3 up, 3 in
Oct 10 10:09:51 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Failed with result 'exit-code'.
Oct 10 10:09:51 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 1.767s CPU time.
Oct 10 10:09:51 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:51 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:09:51 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:51.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:09:52 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:52 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:09:52 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:52.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:09:52 compute-1 ceph-mon[79167]: osdmap e144: 3 total, 3 up, 3 in
Oct 10 10:09:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e145 e145: 3 total, 3 up, 3 in
Oct 10 10:09:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:09:53 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e146 e146: 3 total, 3 up, 3 in
Oct 10 10:09:53 compute-1 ceph-mon[79167]: osdmap e145: 3 total, 3 up, 3 in
Oct 10 10:09:53 compute-1 ceph-mon[79167]: pgmap v712: 353 pgs: 353 active+clean; 8.4 MiB data, 165 MiB used, 60 GiB / 60 GiB avail; 6.6 KiB/s rd, 1.0 MiB/s wr, 10 op/s
Oct 10 10:09:53 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:53 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:53 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:53.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:54 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:54 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:09:54 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:54.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:09:54 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e147 e147: 3 total, 3 up, 3 in
Oct 10 10:09:54 compute-1 ceph-mon[79167]: osdmap e146: 3 total, 3 up, 3 in
Oct 10 10:09:54 compute-1 ceph-mon[79167]: osdmap e147: 3 total, 3 up, 3 in
Oct 10 10:09:55 compute-1 ceph-mon[79167]: pgmap v715: 353 pgs: 353 active+clean; 8.4 MiB data, 165 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 20 op/s
Oct 10 10:09:55 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e148 e148: 3 total, 3 up, 3 in
Oct 10 10:09:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/100955 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 10:09:55 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:55 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:55 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:55.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:56 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:56 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:09:56 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:56.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:09:56 compute-1 ceph-mon[79167]: osdmap e148: 3 total, 3 up, 3 in
Oct 10 10:09:56 compute-1 sudo[238043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:09:57 compute-1 sudo[238043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:09:57 compute-1 sudo[238043]: pam_unix(sudo:session): session closed for user root
Oct 10 10:09:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:09:57 compute-1 ceph-mon[79167]: pgmap v717: 353 pgs: 353 active+clean; 8.4 MiB data, 165 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.8 MiB/s wr, 17 op/s
Oct 10 10:09:57 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:57 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:57 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:57.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:58 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:58 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:09:58 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:09:58.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:09:59 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:09:59 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:09:59 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:09:59.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:09:59 compute-1 ceph-mon[79167]: pgmap v718: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 5.5 MiB/s wr, 50 op/s
Oct 10 10:10:00 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:00 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:00 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:00.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:00 compute-1 ceph-mon[79167]: overall HEALTH_OK
Oct 10 10:10:01 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Scheduled restart job, restart counter is at 12.
Oct 10 10:10:01 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 10:10:01 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 1.767s CPU time.
Oct 10 10:10:01 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 10:10:01 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:01 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:01 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:01.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:01 compute-1 ceph-mon[79167]: pgmap v719: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 4.4 MiB/s wr, 40 op/s
Oct 10 10:10:01 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:10:01 compute-1 podman[238123]: 2025-10-10 10:10:01.947220699 +0000 UTC m=+0.049698393 container create 6546b2fcd1fe6d157439251f6fbf77cef47e24b9f982b7fd6618f23cf4621080 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1)
Oct 10 10:10:01 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6db3e4192f921f61bedae65edfc04d05878ec5c3891f666841a8bdf974350fc/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 10 10:10:01 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6db3e4192f921f61bedae65edfc04d05878ec5c3891f666841a8bdf974350fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:10:01 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6db3e4192f921f61bedae65edfc04d05878ec5c3891f666841a8bdf974350fc/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:10:01 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6db3e4192f921f61bedae65edfc04d05878ec5c3891f666841a8bdf974350fc/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.mssvzx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:10:02 compute-1 podman[238123]: 2025-10-10 10:10:02.014055894 +0000 UTC m=+0.116533618 container init 6546b2fcd1fe6d157439251f6fbf77cef47e24b9f982b7fd6618f23cf4621080 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Oct 10 10:10:02 compute-1 podman[238123]: 2025-10-10 10:10:01.924050154 +0000 UTC m=+0.026527888 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:10:02 compute-1 podman[238123]: 2025-10-10 10:10:02.020624581 +0000 UTC m=+0.123102275 container start 6546b2fcd1fe6d157439251f6fbf77cef47e24b9f982b7fd6618f23cf4621080 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:10:02 compute-1 bash[238123]: 6546b2fcd1fe6d157439251f6fbf77cef47e24b9f982b7fd6618f23cf4621080
Oct 10 10:10:02 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:02 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 10 10:10:02 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:02 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 10 10:10:02 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 10:10:02 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:02 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:02 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:02.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:02 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:02 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 10 10:10:02 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:02 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 10 10:10:02 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:02 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 10 10:10:02 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:02 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 10 10:10:02 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:02 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 10 10:10:02 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:02 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:10:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:10:03 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:03 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:03 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:03.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:03 compute-1 ceph-mon[79167]: pgmap v720: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 3.8 MiB/s wr, 36 op/s
Oct 10 10:10:04 compute-1 podman[238181]: 2025-10-10 10:10:04.006688049 +0000 UTC m=+0.105665914 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 10 10:10:04 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:04 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:04 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:04.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:05 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:05 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:05 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:05.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:05 compute-1 ceph-mon[79167]: pgmap v721: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 3.3 MiB/s wr, 31 op/s
Oct 10 10:10:06 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:06 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:06 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:06.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:07 compute-1 sudo[238201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:10:07 compute-1 sudo[238201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:10:07 compute-1 sudo[238201]: pam_unix(sudo:session): session closed for user root
Oct 10 10:10:07 compute-1 sudo[238226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:10:07 compute-1 sudo[238226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:10:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:10:07 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:07 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:10:07 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:07.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:10:07 compute-1 sudo[238226]: pam_unix(sudo:session): session closed for user root
Oct 10 10:10:07 compute-1 ceph-mon[79167]: pgmap v722: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.9 MiB/s wr, 27 op/s
Oct 10 10:10:08 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:08 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:08 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:08.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:08 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:10:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:08 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:10:09 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:09 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:09 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:09.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:09 compute-1 ceph-mon[79167]: pgmap v723: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.7 MiB/s wr, 27 op/s
Oct 10 10:10:10 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:10 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:10 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:10.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:11 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:10:11 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:10:11 compute-1 ceph-mon[79167]: pgmap v724: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Oct 10 10:10:11 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:10:11 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:10:11 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:10:11 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:10:11 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:10:11 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:10:11 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:10:11 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:11 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:11 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:11.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:12 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:12 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:12 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:12.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:10:13 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:13 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:13 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:13.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:13 compute-1 ceph-mon[79167]: pgmap v725: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 10 10:10:14 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:14 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:14 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:14.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 10:10:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 10 10:10:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 10 10:10:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 10 10:10:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 10 10:10:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 10 10:10:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 10 10:10:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:10:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:10:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:10:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 10 10:10:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 10 10:10:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 10 10:10:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 10 10:10:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 10 10:10:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 10 10:10:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 10 10:10:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 10 10:10:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 10 10:10:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 10 10:10:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 10 10:10:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 10 10:10:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 10 10:10:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 10 10:10:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 10:10:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 10 10:10:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 10 10:10:14 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:14.637 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:dc:6a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:2f:dd:4e:d8:41'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:10:14 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:14.638 141156 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 10 10:10:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f534c000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53400014d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:15 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:15 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:15 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:15 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:15.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:15 compute-1 ceph-mon[79167]: pgmap v726: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:10:16 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:16 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:16 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:16.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:16 compute-1 sudo[238302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:10:16 compute-1 sudo[238302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:10:16 compute-1 sudo[238302]: pam_unix(sudo:session): session closed for user root
Oct 10 10:10:16 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:16 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:16 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:16 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:17 compute-1 sudo[238327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:10:17 compute-1 sudo[238327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:10:17 compute-1 sudo[238327]: pam_unix(sudo:session): session closed for user root
Oct 10 10:10:17 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:10:17 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:10:17 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:10:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:10:17 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/101017 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 10:10:17 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:17 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53400021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:17 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:17 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:10:17 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:17.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:10:18 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:18 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:18 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:18.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:18 compute-1 ceph-mon[79167]: pgmap v727: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:10:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:18 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53280016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:18 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53200016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:18 compute-1 podman[238353]: 2025-10-10 10:10:18.960729187 +0000 UTC m=+0.058717236 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 10:10:18 compute-1 podman[238354]: 2025-10-10 10:10:18.969023281 +0000 UTC m=+0.067170395 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 10 10:10:19 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #46. Immutable memtables: 0.
Oct 10 10:10:19 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:10:19.204184) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 10:10:19 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 46
Oct 10 10:10:19 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091019204211, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 711, "num_deletes": 257, "total_data_size": 1346065, "memory_usage": 1366672, "flush_reason": "Manual Compaction"}
Oct 10 10:10:19 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #47: started
Oct 10 10:10:19 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091019214635, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 47, "file_size": 870498, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24403, "largest_seqno": 25109, "table_properties": {"data_size": 867026, "index_size": 1316, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7925, "raw_average_key_size": 18, "raw_value_size": 859782, "raw_average_value_size": 2008, "num_data_blocks": 58, "num_entries": 428, "num_filter_entries": 428, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760090985, "oldest_key_time": 1760090985, "file_creation_time": 1760091019, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:10:19 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 10509 microseconds, and 4385 cpu microseconds.
Oct 10 10:10:19 compute-1 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:10:19 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:10:19.214685) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #47: 870498 bytes OK
Oct 10 10:10:19 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:10:19.214708) [db/memtable_list.cc:519] [default] Level-0 commit table #47 started
Oct 10 10:10:19 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:10:19.220486) [db/memtable_list.cc:722] [default] Level-0 commit table #47: memtable #1 done
Oct 10 10:10:19 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:10:19.220510) EVENT_LOG_v1 {"time_micros": 1760091019220504, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 10:10:19 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:10:19.220532) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 10:10:19 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 1342151, prev total WAL file size 1342151, number of live WAL files 2.
Oct 10 10:10:19 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000043.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:10:19 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:10:19.221058) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323530' seq:72057594037927935, type:22 .. '6C6F676D00353033' seq:0, type:0; will stop at (end)
Oct 10 10:10:19 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 10:10:19 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [47(850KB)], [45(12MB)]
Oct 10 10:10:19 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091019221094, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [47], "files_L6": [45], "score": -1, "input_data_size": 13578294, "oldest_snapshot_seqno": -1}
Oct 10 10:10:19 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #48: 5410 keys, 13423246 bytes, temperature: kUnknown
Oct 10 10:10:19 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091019278925, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 48, "file_size": 13423246, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13387091, "index_size": 21517, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13573, "raw_key_size": 138016, "raw_average_key_size": 25, "raw_value_size": 13289148, "raw_average_value_size": 2456, "num_data_blocks": 879, "num_entries": 5410, "num_filter_entries": 5410, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089522, "oldest_key_time": 0, "file_creation_time": 1760091019, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 48, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:10:19 compute-1 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:10:19 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:10:19.279155) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 13423246 bytes
Oct 10 10:10:19 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:10:19.280700) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 234.4 rd, 231.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 12.1 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(31.0) write-amplify(15.4) OK, records in: 5941, records dropped: 531 output_compression: NoCompression
Oct 10 10:10:19 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:10:19.280715) EVENT_LOG_v1 {"time_micros": 1760091019280708, "job": 26, "event": "compaction_finished", "compaction_time_micros": 57917, "compaction_time_cpu_micros": 30449, "output_level": 6, "num_output_files": 1, "total_output_size": 13423246, "num_input_records": 5941, "num_output_records": 5410, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 10:10:19 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:10:19 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091019281028, "job": 26, "event": "table_file_deletion", "file_number": 47}
Oct 10 10:10:19 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000045.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:10:19 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091019283181, "job": 26, "event": "table_file_deletion", "file_number": 45}
Oct 10 10:10:19 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:10:19.221015) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:10:19 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:10:19.283407) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:10:19 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:10:19.283416) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:10:19 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:10:19.283417) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:10:19 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:10:19.283426) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:10:19 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:10:19.283428) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:10:19 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:19.640 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ee0899c1-415d-4aa8-abe8-1240b4e8bf2c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:10:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:19 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:19 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:19 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:19 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:19.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:20 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:20 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:20 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:20.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:20 compute-1 ceph-mon[79167]: pgmap v728: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Oct 10 10:10:20 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:20 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53400021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:20 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:20 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53280016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:21 compute-1 podman[238391]: 2025-10-10 10:10:21.041559894 +0000 UTC m=+0.136983491 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 10 10:10:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:21 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53200016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:21 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:21 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:21 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:21.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:22 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:22 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:22 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:22.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:22 compute-1 ceph-mon[79167]: pgmap v729: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 10:10:22 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:10:22 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:22 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:22 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:22 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53400021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:23 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53280016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:23 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:23 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:10:23 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:23.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:10:24 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:24 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:24 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:24.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:24 compute-1 ceph-mon[79167]: pgmap v730: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 10:10:24 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:24 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53200016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:24 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:24 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:25 compute-1 ceph-mon[79167]: pgmap v731: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 10:10:25 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:25 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53400021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:25 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:25 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:25 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:25.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:26 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:26 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:26 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:26.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:26 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/3994215839' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:10:26 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/3994215839' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:10:26 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:26 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:26 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:26 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:27 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:10:27 compute-1 ceph-mon[79167]: pgmap v732: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 10:10:27 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:27 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:27 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:27 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:27 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:27.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:28 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:28 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:10:28 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:28.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:10:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:28 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53400021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:28 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:29 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:29 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:29 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:29 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:29.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:29 compute-1 ceph-mon[79167]: pgmap v733: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 10 10:10:30 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:30 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:30 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:30.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:30 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:30 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:30 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:30 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53400021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:31 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:31 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:31 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:31 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:31.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:31 compute-1 ceph-mon[79167]: pgmap v734: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:10:31 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:10:32 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:32 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:32 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:32.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:32 compute-1 nova_compute[235132]: 2025-10-10 10:10:32.344 2 DEBUG oslo_concurrency.lockutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "b8379f65-91e0-45a5-a245-a1bc27260f20" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:10:32 compute-1 nova_compute[235132]: 2025-10-10 10:10:32.344 2 DEBUG oslo_concurrency.lockutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "b8379f65-91e0-45a5-a245-a1bc27260f20" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:10:32 compute-1 nova_compute[235132]: 2025-10-10 10:10:32.374 2 DEBUG nova.compute.manager [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 10 10:10:32 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:10:32 compute-1 nova_compute[235132]: 2025-10-10 10:10:32.491 2 DEBUG oslo_concurrency.lockutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:10:32 compute-1 nova_compute[235132]: 2025-10-10 10:10:32.492 2 DEBUG oslo_concurrency.lockutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:10:32 compute-1 nova_compute[235132]: 2025-10-10 10:10:32.499 2 DEBUG nova.virt.hardware [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 10 10:10:32 compute-1 nova_compute[235132]: 2025-10-10 10:10:32.500 2 INFO nova.compute.claims [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Claim successful on node compute-1.ctlplane.example.com
Oct 10 10:10:32 compute-1 nova_compute[235132]: 2025-10-10 10:10:32.633 2 DEBUG oslo_concurrency.processutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:10:32 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:32 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:32 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:32 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:33 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:10:33 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3328175243' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:10:33 compute-1 nova_compute[235132]: 2025-10-10 10:10:33.030 2 DEBUG oslo_concurrency.processutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.397s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:10:33 compute-1 nova_compute[235132]: 2025-10-10 10:10:33.036 2 DEBUG nova.compute.provider_tree [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:10:33 compute-1 nova_compute[235132]: 2025-10-10 10:10:33.060 2 DEBUG nova.scheduler.client.report [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:10:33 compute-1 nova_compute[235132]: 2025-10-10 10:10:33.087 2 DEBUG oslo_concurrency.lockutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:10:33 compute-1 nova_compute[235132]: 2025-10-10 10:10:33.087 2 DEBUG nova.compute.manager [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 10 10:10:33 compute-1 nova_compute[235132]: 2025-10-10 10:10:33.139 2 DEBUG nova.compute.manager [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 10 10:10:33 compute-1 nova_compute[235132]: 2025-10-10 10:10:33.140 2 DEBUG nova.network.neutron [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 10 10:10:33 compute-1 nova_compute[235132]: 2025-10-10 10:10:33.173 2 INFO nova.virt.libvirt.driver [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 10 10:10:33 compute-1 nova_compute[235132]: 2025-10-10 10:10:33.201 2 DEBUG nova.compute.manager [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 10 10:10:33 compute-1 nova_compute[235132]: 2025-10-10 10:10:33.321 2 DEBUG nova.compute.manager [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 10 10:10:33 compute-1 nova_compute[235132]: 2025-10-10 10:10:33.323 2 DEBUG nova.virt.libvirt.driver [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 10 10:10:33 compute-1 nova_compute[235132]: 2025-10-10 10:10:33.323 2 INFO nova.virt.libvirt.driver [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Creating image(s)
Oct 10 10:10:33 compute-1 nova_compute[235132]: 2025-10-10 10:10:33.367 2 DEBUG nova.storage.rbd_utils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image b8379f65-91e0-45a5-a245-a1bc27260f20_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:10:33 compute-1 nova_compute[235132]: 2025-10-10 10:10:33.408 2 DEBUG nova.storage.rbd_utils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image b8379f65-91e0-45a5-a245-a1bc27260f20_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:10:33 compute-1 nova_compute[235132]: 2025-10-10 10:10:33.444 2 DEBUG nova.storage.rbd_utils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image b8379f65-91e0-45a5-a245-a1bc27260f20_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:10:33 compute-1 nova_compute[235132]: 2025-10-10 10:10:33.448 2 DEBUG oslo_concurrency.lockutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "eec5fe2328f977d3b1a385313e521aef425c0ac1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:10:33 compute-1 nova_compute[235132]: 2025-10-10 10:10:33.449 2 DEBUG oslo_concurrency.lockutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "eec5fe2328f977d3b1a385313e521aef425c0ac1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:10:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:33 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53400021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:33 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:33 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:33 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:33.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:33 compute-1 ceph-mon[79167]: pgmap v735: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Oct 10 10:10:33 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3328175243' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:10:34 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:34 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:34 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:34.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:34 compute-1 nova_compute[235132]: 2025-10-10 10:10:34.153 2 DEBUG nova.virt.libvirt.imagebackend [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Image locations are: [{'url': 'rbd://21f084a3-af34-5230-afe4-ea5cd24a55f4/images/5ae78700-970d-45b4-a57d-978a054c7519/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://21f084a3-af34-5230-afe4-ea5cd24a55f4/images/5ae78700-970d-45b4-a57d-978a054c7519/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 10 10:10:34 compute-1 nova_compute[235132]: 2025-10-10 10:10:34.532 2 WARNING oslo_policy.policy [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Oct 10 10:10:34 compute-1 nova_compute[235132]: 2025-10-10 10:10:34.533 2 WARNING oslo_policy.policy [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Oct 10 10:10:34 compute-1 nova_compute[235132]: 2025-10-10 10:10:34.540 2 DEBUG nova.policy [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7956778c03764aaf8906c9b435337976', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 10 10:10:34 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:34 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:34 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:34 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:34 compute-1 podman[238501]: 2025-10-10 10:10:34.96157498 +0000 UTC m=+0.061774280 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:10:35 compute-1 nova_compute[235132]: 2025-10-10 10:10:35.256 2 DEBUG oslo_concurrency.processutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:10:35 compute-1 nova_compute[235132]: 2025-10-10 10:10:35.339 2 DEBUG oslo_concurrency.processutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1.part --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:10:35 compute-1 nova_compute[235132]: 2025-10-10 10:10:35.340 2 DEBUG nova.virt.images [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] 5ae78700-970d-45b4-a57d-978a054c7519 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Oct 10 10:10:35 compute-1 nova_compute[235132]: 2025-10-10 10:10:35.341 2 DEBUG nova.privsep.utils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct 10 10:10:35 compute-1 nova_compute[235132]: 2025-10-10 10:10:35.342 2 DEBUG oslo_concurrency.processutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1.part /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:10:35 compute-1 nova_compute[235132]: 2025-10-10 10:10:35.524 2 DEBUG oslo_concurrency.processutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1.part /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1.converted" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:10:35 compute-1 nova_compute[235132]: 2025-10-10 10:10:35.533 2 DEBUG oslo_concurrency.processutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:10:35 compute-1 nova_compute[235132]: 2025-10-10 10:10:35.613 2 DEBUG oslo_concurrency.processutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1.converted --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:10:35 compute-1 nova_compute[235132]: 2025-10-10 10:10:35.615 2 DEBUG oslo_concurrency.lockutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "eec5fe2328f977d3b1a385313e521aef425c0ac1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.166s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:10:35 compute-1 nova_compute[235132]: 2025-10-10 10:10:35.659 2 DEBUG nova.storage.rbd_utils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image b8379f65-91e0-45a5-a245-a1bc27260f20_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:10:35 compute-1 nova_compute[235132]: 2025-10-10 10:10:35.664 2 DEBUG oslo_concurrency.processutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 b8379f65-91e0-45a5-a245-a1bc27260f20_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:10:35 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:35 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:35 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:35 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:35 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:35.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:35 compute-1 ceph-mon[79167]: pgmap v736: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:10:35 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e149 e149: 3 total, 3 up, 3 in
Oct 10 10:10:36 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:36 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:36 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:36.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:36 compute-1 nova_compute[235132]: 2025-10-10 10:10:36.570 2 DEBUG nova.network.neutron [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Successfully created port: 3281ffe2-3fe8-4217-bcda-e7f8c55f5dae _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 10 10:10:36 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:36 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53400021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:36 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:36 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:36 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e150 e150: 3 total, 3 up, 3 in
Oct 10 10:10:37 compute-1 ceph-mon[79167]: osdmap e149: 3 total, 3 up, 3 in
Oct 10 10:10:37 compute-1 sudo[238573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:10:37 compute-1 sudo[238573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:10:37 compute-1 sudo[238573]: pam_unix(sudo:session): session closed for user root
Oct 10 10:10:37 compute-1 nova_compute[235132]: 2025-10-10 10:10:37.332 2 DEBUG oslo_concurrency.processutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 b8379f65-91e0-45a5-a245-a1bc27260f20_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.667s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:10:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:10:37 compute-1 nova_compute[235132]: 2025-10-10 10:10:37.448 2 DEBUG nova.storage.rbd_utils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] resizing rbd image b8379f65-91e0-45a5-a245-a1bc27260f20_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 10 10:10:37 compute-1 nova_compute[235132]: 2025-10-10 10:10:37.595 2 DEBUG nova.objects.instance [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'migration_context' on Instance uuid b8379f65-91e0-45a5-a245-a1bc27260f20 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 10:10:37 compute-1 nova_compute[235132]: 2025-10-10 10:10:37.622 2 DEBUG nova.virt.libvirt.driver [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 10 10:10:37 compute-1 nova_compute[235132]: 2025-10-10 10:10:37.623 2 DEBUG nova.virt.libvirt.driver [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Ensure instance console log exists: /var/lib/nova/instances/b8379f65-91e0-45a5-a245-a1bc27260f20/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 10 10:10:37 compute-1 nova_compute[235132]: 2025-10-10 10:10:37.623 2 DEBUG oslo_concurrency.lockutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:10:37 compute-1 nova_compute[235132]: 2025-10-10 10:10:37.624 2 DEBUG oslo_concurrency.lockutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:10:37 compute-1 nova_compute[235132]: 2025-10-10 10:10:37.624 2 DEBUG oslo_concurrency.lockutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:10:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:37 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:37 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:37 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:37 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:37.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:38 compute-1 ceph-mon[79167]: pgmap v738: 353 pgs: 353 active+clean; 41 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 op/s
Oct 10 10:10:38 compute-1 ceph-mon[79167]: osdmap e150: 3 total, 3 up, 3 in
Oct 10 10:10:38 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:38 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:38 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:38.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:38 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:38 compute-1 nova_compute[235132]: 2025-10-10 10:10:38.717 2 DEBUG nova.network.neutron [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Successfully updated port: 3281ffe2-3fe8-4217-bcda-e7f8c55f5dae _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 10 10:10:38 compute-1 nova_compute[235132]: 2025-10-10 10:10:38.742 2 DEBUG oslo_concurrency.lockutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "refresh_cache-b8379f65-91e0-45a5-a245-a1bc27260f20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:10:38 compute-1 nova_compute[235132]: 2025-10-10 10:10:38.742 2 DEBUG oslo_concurrency.lockutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquired lock "refresh_cache-b8379f65-91e0-45a5-a245-a1bc27260f20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:10:38 compute-1 nova_compute[235132]: 2025-10-10 10:10:38.742 2 DEBUG nova.network.neutron [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 10 10:10:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/101038 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 10:10:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:38 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53400021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:39 compute-1 nova_compute[235132]: 2025-10-10 10:10:39.085 2 DEBUG nova.network.neutron [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 10 10:10:39 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 e151: 3 total, 3 up, 3 in
Oct 10 10:10:39 compute-1 nova_compute[235132]: 2025-10-10 10:10:39.318 2 DEBUG nova.compute.manager [req-0204d1cb-d5bb-4cb6-b0e6-52e654ea2aa7 req-4a3e6588-7508-4582-b122-0e648fd3aa5c 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Received event network-changed-3281ffe2-3fe8-4217-bcda-e7f8c55f5dae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:10:39 compute-1 nova_compute[235132]: 2025-10-10 10:10:39.318 2 DEBUG nova.compute.manager [req-0204d1cb-d5bb-4cb6-b0e6-52e654ea2aa7 req-4a3e6588-7508-4582-b122-0e648fd3aa5c 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Refreshing instance network info cache due to event network-changed-3281ffe2-3fe8-4217-bcda-e7f8c55f5dae. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 10 10:10:39 compute-1 nova_compute[235132]: 2025-10-10 10:10:39.319 2 DEBUG oslo_concurrency.lockutils [req-0204d1cb-d5bb-4cb6-b0e6-52e654ea2aa7 req-4a3e6588-7508-4582-b122-0e648fd3aa5c 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "refresh_cache-b8379f65-91e0-45a5-a245-a1bc27260f20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:10:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:39 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:39 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:39 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:39 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:39.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:40 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:40 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:10:40 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:40.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:10:40 compute-1 nova_compute[235132]: 2025-10-10 10:10:40.200 2 DEBUG nova.network.neutron [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Updating instance_info_cache with network_info: [{"id": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "address": "fa:16:3e:f6:78:9d", "network": {"id": "bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11", "bridge": "br-int", "label": "tempest-network-smoke--1013272438", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3281ffe2-3f", "ovs_interfaceid": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:10:40 compute-1 ceph-mon[79167]: pgmap v740: 353 pgs: 353 active+clean; 88 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 42 op/s
Oct 10 10:10:40 compute-1 ceph-mon[79167]: osdmap e151: 3 total, 3 up, 3 in
Oct 10 10:10:40 compute-1 nova_compute[235132]: 2025-10-10 10:10:40.223 2 DEBUG oslo_concurrency.lockutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Releasing lock "refresh_cache-b8379f65-91e0-45a5-a245-a1bc27260f20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:10:40 compute-1 nova_compute[235132]: 2025-10-10 10:10:40.223 2 DEBUG nova.compute.manager [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Instance network_info: |[{"id": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "address": "fa:16:3e:f6:78:9d", "network": {"id": "bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11", "bridge": "br-int", "label": "tempest-network-smoke--1013272438", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3281ffe2-3f", "ovs_interfaceid": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 10 10:10:40 compute-1 nova_compute[235132]: 2025-10-10 10:10:40.224 2 DEBUG oslo_concurrency.lockutils [req-0204d1cb-d5bb-4cb6-b0e6-52e654ea2aa7 req-4a3e6588-7508-4582-b122-0e648fd3aa5c 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquired lock "refresh_cache-b8379f65-91e0-45a5-a245-a1bc27260f20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:10:40 compute-1 nova_compute[235132]: 2025-10-10 10:10:40.225 2 DEBUG nova.network.neutron [req-0204d1cb-d5bb-4cb6-b0e6-52e654ea2aa7 req-4a3e6588-7508-4582-b122-0e648fd3aa5c 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Refreshing network info cache for port 3281ffe2-3fe8-4217-bcda-e7f8c55f5dae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 10 10:10:40 compute-1 nova_compute[235132]: 2025-10-10 10:10:40.230 2 DEBUG nova.virt.libvirt.driver [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Start _get_guest_xml network_info=[{"id": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "address": "fa:16:3e:f6:78:9d", "network": {"id": "bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11", "bridge": "br-int", "label": "tempest-network-smoke--1013272438", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3281ffe2-3f", "ovs_interfaceid": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-10T10:09:50Z,direct_url=<?>,disk_format='qcow2',id=5ae78700-970d-45b4-a57d-978a054c7519,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ec962e275689437d80680ff3ea69c852',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-10T10:09:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'boot_index': 0, 'guest_format': None, 'image_id': '5ae78700-970d-45b4-a57d-978a054c7519'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 10 10:10:40 compute-1 nova_compute[235132]: 2025-10-10 10:10:40.238 2 WARNING nova.virt.libvirt.driver [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:10:40 compute-1 nova_compute[235132]: 2025-10-10 10:10:40.255 2 DEBUG nova.virt.libvirt.host [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 10 10:10:40 compute-1 nova_compute[235132]: 2025-10-10 10:10:40.256 2 DEBUG nova.virt.libvirt.host [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 10 10:10:40 compute-1 nova_compute[235132]: 2025-10-10 10:10:40.260 2 DEBUG nova.virt.libvirt.host [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 10 10:10:40 compute-1 nova_compute[235132]: 2025-10-10 10:10:40.261 2 DEBUG nova.virt.libvirt.host [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 10 10:10:40 compute-1 nova_compute[235132]: 2025-10-10 10:10:40.261 2 DEBUG nova.virt.libvirt.driver [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 10 10:10:40 compute-1 nova_compute[235132]: 2025-10-10 10:10:40.261 2 DEBUG nova.virt.hardware [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-10T10:09:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='00373e71-6208-4238-ad85-db0452c53bc6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-10T10:09:50Z,direct_url=<?>,disk_format='qcow2',id=5ae78700-970d-45b4-a57d-978a054c7519,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ec962e275689437d80680ff3ea69c852',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-10T10:09:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 10 10:10:40 compute-1 nova_compute[235132]: 2025-10-10 10:10:40.262 2 DEBUG nova.virt.hardware [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 10 10:10:40 compute-1 nova_compute[235132]: 2025-10-10 10:10:40.262 2 DEBUG nova.virt.hardware [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 10 10:10:40 compute-1 nova_compute[235132]: 2025-10-10 10:10:40.262 2 DEBUG nova.virt.hardware [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 10 10:10:40 compute-1 nova_compute[235132]: 2025-10-10 10:10:40.262 2 DEBUG nova.virt.hardware [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 10 10:10:40 compute-1 nova_compute[235132]: 2025-10-10 10:10:40.263 2 DEBUG nova.virt.hardware [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 10 10:10:40 compute-1 nova_compute[235132]: 2025-10-10 10:10:40.263 2 DEBUG nova.virt.hardware [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 10 10:10:40 compute-1 nova_compute[235132]: 2025-10-10 10:10:40.263 2 DEBUG nova.virt.hardware [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 10 10:10:40 compute-1 nova_compute[235132]: 2025-10-10 10:10:40.263 2 DEBUG nova.virt.hardware [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 10 10:10:40 compute-1 nova_compute[235132]: 2025-10-10 10:10:40.264 2 DEBUG nova.virt.hardware [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 10 10:10:40 compute-1 nova_compute[235132]: 2025-10-10 10:10:40.264 2 DEBUG nova.virt.hardware [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 10 10:10:40 compute-1 nova_compute[235132]: 2025-10-10 10:10:40.270 2 DEBUG nova.privsep.utils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct 10 10:10:40 compute-1 nova_compute[235132]: 2025-10-10 10:10:40.270 2 DEBUG oslo_concurrency.processutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:10:40 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:40 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:40 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 10 10:10:40 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/809446276' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:10:40 compute-1 nova_compute[235132]: 2025-10-10 10:10:40.696 2 DEBUG oslo_concurrency.processutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:10:40 compute-1 nova_compute[235132]: 2025-10-10 10:10:40.733 2 DEBUG nova.storage.rbd_utils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image b8379f65-91e0-45a5-a245-a1bc27260f20_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:10:40 compute-1 nova_compute[235132]: 2025-10-10 10:10:40.740 2 DEBUG oslo_concurrency.processutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:10:40 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:40 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:41 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 10 10:10:41 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1280200693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:10:41 compute-1 nova_compute[235132]: 2025-10-10 10:10:41.219 2 DEBUG oslo_concurrency.processutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:10:41 compute-1 nova_compute[235132]: 2025-10-10 10:10:41.222 2 DEBUG nova.virt.libvirt.vif [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-10T10:10:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1823645149',display_name='tempest-TestNetworkBasicOps-server-1823645149',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1823645149',id=1,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH1cySxBL6pw+6qEpturfgqFpVsnU32fmvYm1ovqdR9d7Yu/HsSXnbP11SE0LsPImrqW3NM7Ipp+q9ZG2BlkPbNPH4TMiwgnLU7hJmzvd5980ZxncdeOwTfn8+UHeM5LSQ==',key_name='tempest-TestNetworkBasicOps-1606841299',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-riep0t81',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-10T10:10:33Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=b8379f65-91e0-45a5-a245-a1bc27260f20,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "address": "fa:16:3e:f6:78:9d", "network": {"id": "bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11", "bridge": "br-int", "label": "tempest-network-smoke--1013272438", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3281ffe2-3f", "ovs_interfaceid": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 10 10:10:41 compute-1 nova_compute[235132]: 2025-10-10 10:10:41.223 2 DEBUG nova.network.os_vif_util [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "address": "fa:16:3e:f6:78:9d", "network": {"id": "bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11", "bridge": "br-int", "label": "tempest-network-smoke--1013272438", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3281ffe2-3f", "ovs_interfaceid": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 10 10:10:41 compute-1 nova_compute[235132]: 2025-10-10 10:10:41.224 2 DEBUG nova.network.os_vif_util [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:78:9d,bridge_name='br-int',has_traffic_filtering=True,id=3281ffe2-3fe8-4217-bcda-e7f8c55f5dae,network=Network(bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3281ffe2-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 10 10:10:41 compute-1 nova_compute[235132]: 2025-10-10 10:10:41.228 2 DEBUG nova.objects.instance [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'pci_devices' on Instance uuid b8379f65-91e0-45a5-a245-a1bc27260f20 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 10:10:41 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/809446276' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:10:41 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/1280200693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:10:41 compute-1 nova_compute[235132]: 2025-10-10 10:10:41.247 2 DEBUG nova.virt.libvirt.driver [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] End _get_guest_xml xml=<domain type="kvm">
Oct 10 10:10:41 compute-1 nova_compute[235132]:   <uuid>b8379f65-91e0-45a5-a245-a1bc27260f20</uuid>
Oct 10 10:10:41 compute-1 nova_compute[235132]:   <name>instance-00000001</name>
Oct 10 10:10:41 compute-1 nova_compute[235132]:   <memory>131072</memory>
Oct 10 10:10:41 compute-1 nova_compute[235132]:   <vcpu>1</vcpu>
Oct 10 10:10:41 compute-1 nova_compute[235132]:   <metadata>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 10:10:41 compute-1 nova_compute[235132]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:       <nova:name>tempest-TestNetworkBasicOps-server-1823645149</nova:name>
Oct 10 10:10:41 compute-1 nova_compute[235132]:       <nova:creationTime>2025-10-10 10:10:40</nova:creationTime>
Oct 10 10:10:41 compute-1 nova_compute[235132]:       <nova:flavor name="m1.nano">
Oct 10 10:10:41 compute-1 nova_compute[235132]:         <nova:memory>128</nova:memory>
Oct 10 10:10:41 compute-1 nova_compute[235132]:         <nova:disk>1</nova:disk>
Oct 10 10:10:41 compute-1 nova_compute[235132]:         <nova:swap>0</nova:swap>
Oct 10 10:10:41 compute-1 nova_compute[235132]:         <nova:ephemeral>0</nova:ephemeral>
Oct 10 10:10:41 compute-1 nova_compute[235132]:         <nova:vcpus>1</nova:vcpus>
Oct 10 10:10:41 compute-1 nova_compute[235132]:       </nova:flavor>
Oct 10 10:10:41 compute-1 nova_compute[235132]:       <nova:owner>
Oct 10 10:10:41 compute-1 nova_compute[235132]:         <nova:user uuid="7956778c03764aaf8906c9b435337976">tempest-TestNetworkBasicOps-188749107-project-member</nova:user>
Oct 10 10:10:41 compute-1 nova_compute[235132]:         <nova:project uuid="d5e531d4b440422d946eaf6fd4e166f7">tempest-TestNetworkBasicOps-188749107</nova:project>
Oct 10 10:10:41 compute-1 nova_compute[235132]:       </nova:owner>
Oct 10 10:10:41 compute-1 nova_compute[235132]:       <nova:root type="image" uuid="5ae78700-970d-45b4-a57d-978a054c7519"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:       <nova:ports>
Oct 10 10:10:41 compute-1 nova_compute[235132]:         <nova:port uuid="3281ffe2-3fe8-4217-bcda-e7f8c55f5dae">
Oct 10 10:10:41 compute-1 nova_compute[235132]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:         </nova:port>
Oct 10 10:10:41 compute-1 nova_compute[235132]:       </nova:ports>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     </nova:instance>
Oct 10 10:10:41 compute-1 nova_compute[235132]:   </metadata>
Oct 10 10:10:41 compute-1 nova_compute[235132]:   <sysinfo type="smbios">
Oct 10 10:10:41 compute-1 nova_compute[235132]:     <system>
Oct 10 10:10:41 compute-1 nova_compute[235132]:       <entry name="manufacturer">RDO</entry>
Oct 10 10:10:41 compute-1 nova_compute[235132]:       <entry name="product">OpenStack Compute</entry>
Oct 10 10:10:41 compute-1 nova_compute[235132]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 10:10:41 compute-1 nova_compute[235132]:       <entry name="serial">b8379f65-91e0-45a5-a245-a1bc27260f20</entry>
Oct 10 10:10:41 compute-1 nova_compute[235132]:       <entry name="uuid">b8379f65-91e0-45a5-a245-a1bc27260f20</entry>
Oct 10 10:10:41 compute-1 nova_compute[235132]:       <entry name="family">Virtual Machine</entry>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     </system>
Oct 10 10:10:41 compute-1 nova_compute[235132]:   </sysinfo>
Oct 10 10:10:41 compute-1 nova_compute[235132]:   <os>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     <boot dev="hd"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     <smbios mode="sysinfo"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:   </os>
Oct 10 10:10:41 compute-1 nova_compute[235132]:   <features>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     <acpi/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     <apic/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     <vmcoreinfo/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:   </features>
Oct 10 10:10:41 compute-1 nova_compute[235132]:   <clock offset="utc">
Oct 10 10:10:41 compute-1 nova_compute[235132]:     <timer name="pit" tickpolicy="delay"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     <timer name="hpet" present="no"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:   </clock>
Oct 10 10:10:41 compute-1 nova_compute[235132]:   <cpu mode="host-model" match="exact">
Oct 10 10:10:41 compute-1 nova_compute[235132]:     <topology sockets="1" cores="1" threads="1"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:   </cpu>
Oct 10 10:10:41 compute-1 nova_compute[235132]:   <devices>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     <disk type="network" device="disk">
Oct 10 10:10:41 compute-1 nova_compute[235132]:       <driver type="raw" cache="none"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:       <source protocol="rbd" name="vms/b8379f65-91e0-45a5-a245-a1bc27260f20_disk">
Oct 10 10:10:41 compute-1 nova_compute[235132]:         <host name="192.168.122.100" port="6789"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:         <host name="192.168.122.102" port="6789"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:         <host name="192.168.122.101" port="6789"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:       </source>
Oct 10 10:10:41 compute-1 nova_compute[235132]:       <auth username="openstack">
Oct 10 10:10:41 compute-1 nova_compute[235132]:         <secret type="ceph" uuid="21f084a3-af34-5230-afe4-ea5cd24a55f4"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:       </auth>
Oct 10 10:10:41 compute-1 nova_compute[235132]:       <target dev="vda" bus="virtio"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     </disk>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     <disk type="network" device="cdrom">
Oct 10 10:10:41 compute-1 nova_compute[235132]:       <driver type="raw" cache="none"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:       <source protocol="rbd" name="vms/b8379f65-91e0-45a5-a245-a1bc27260f20_disk.config">
Oct 10 10:10:41 compute-1 nova_compute[235132]:         <host name="192.168.122.100" port="6789"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:         <host name="192.168.122.102" port="6789"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:         <host name="192.168.122.101" port="6789"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:       </source>
Oct 10 10:10:41 compute-1 nova_compute[235132]:       <auth username="openstack">
Oct 10 10:10:41 compute-1 nova_compute[235132]:         <secret type="ceph" uuid="21f084a3-af34-5230-afe4-ea5cd24a55f4"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:       </auth>
Oct 10 10:10:41 compute-1 nova_compute[235132]:       <target dev="sda" bus="sata"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     </disk>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     <interface type="ethernet">
Oct 10 10:10:41 compute-1 nova_compute[235132]:       <mac address="fa:16:3e:f6:78:9d"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:       <model type="virtio"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:       <driver name="vhost" rx_queue_size="512"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:       <mtu size="1442"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:       <target dev="tap3281ffe2-3f"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     </interface>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     <serial type="pty">
Oct 10 10:10:41 compute-1 nova_compute[235132]:       <log file="/var/lib/nova/instances/b8379f65-91e0-45a5-a245-a1bc27260f20/console.log" append="off"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     </serial>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     <video>
Oct 10 10:10:41 compute-1 nova_compute[235132]:       <model type="virtio"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     </video>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     <input type="tablet" bus="usb"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     <rng model="virtio">
Oct 10 10:10:41 compute-1 nova_compute[235132]:       <backend model="random">/dev/urandom</backend>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     </rng>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     <controller type="usb" index="0"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     <memballoon model="virtio">
Oct 10 10:10:41 compute-1 nova_compute[235132]:       <stats period="10"/>
Oct 10 10:10:41 compute-1 nova_compute[235132]:     </memballoon>
Oct 10 10:10:41 compute-1 nova_compute[235132]:   </devices>
Oct 10 10:10:41 compute-1 nova_compute[235132]: </domain>
Oct 10 10:10:41 compute-1 nova_compute[235132]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 10 10:10:41 compute-1 nova_compute[235132]: 2025-10-10 10:10:41.248 2 DEBUG nova.compute.manager [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Preparing to wait for external event network-vif-plugged-3281ffe2-3fe8-4217-bcda-e7f8c55f5dae prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 10 10:10:41 compute-1 nova_compute[235132]: 2025-10-10 10:10:41.248 2 DEBUG oslo_concurrency.lockutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "b8379f65-91e0-45a5-a245-a1bc27260f20-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:10:41 compute-1 nova_compute[235132]: 2025-10-10 10:10:41.249 2 DEBUG oslo_concurrency.lockutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "b8379f65-91e0-45a5-a245-a1bc27260f20-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:10:41 compute-1 nova_compute[235132]: 2025-10-10 10:10:41.249 2 DEBUG oslo_concurrency.lockutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "b8379f65-91e0-45a5-a245-a1bc27260f20-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:10:41 compute-1 nova_compute[235132]: 2025-10-10 10:10:41.250 2 DEBUG nova.virt.libvirt.vif [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-10T10:10:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1823645149',display_name='tempest-TestNetworkBasicOps-server-1823645149',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1823645149',id=1,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH1cySxBL6pw+6qEpturfgqFpVsnU32fmvYm1ovqdR9d7Yu/HsSXnbP11SE0LsPImrqW3NM7Ipp+q9ZG2BlkPbNPH4TMiwgnLU7hJmzvd5980ZxncdeOwTfn8+UHeM5LSQ==',key_name='tempest-TestNetworkBasicOps-1606841299',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-riep0t81',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-10T10:10:33Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=b8379f65-91e0-45a5-a245-a1bc27260f20,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "address": "fa:16:3e:f6:78:9d", "network": {"id": "bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11", "bridge": "br-int", "label": "tempest-network-smoke--1013272438", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3281ffe2-3f", "ovs_interfaceid": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 10 10:10:41 compute-1 nova_compute[235132]: 2025-10-10 10:10:41.251 2 DEBUG nova.network.os_vif_util [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "address": "fa:16:3e:f6:78:9d", "network": {"id": "bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11", "bridge": "br-int", "label": "tempest-network-smoke--1013272438", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3281ffe2-3f", "ovs_interfaceid": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 10 10:10:41 compute-1 nova_compute[235132]: 2025-10-10 10:10:41.252 2 DEBUG nova.network.os_vif_util [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:78:9d,bridge_name='br-int',has_traffic_filtering=True,id=3281ffe2-3fe8-4217-bcda-e7f8c55f5dae,network=Network(bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3281ffe2-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 10 10:10:41 compute-1 nova_compute[235132]: 2025-10-10 10:10:41.252 2 DEBUG os_vif [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:78:9d,bridge_name='br-int',has_traffic_filtering=True,id=3281ffe2-3fe8-4217-bcda-e7f8c55f5dae,network=Network(bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3281ffe2-3f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 10 10:10:41 compute-1 nova_compute[235132]: 2025-10-10 10:10:41.308 2 DEBUG ovsdbapp.backend.ovs_idl [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 10 10:10:41 compute-1 nova_compute[235132]: 2025-10-10 10:10:41.308 2 DEBUG ovsdbapp.backend.ovs_idl [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 10 10:10:41 compute-1 nova_compute[235132]: 2025-10-10 10:10:41.309 2 DEBUG ovsdbapp.backend.ovs_idl [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 10 10:10:41 compute-1 nova_compute[235132]: 2025-10-10 10:10:41.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 10 10:10:41 compute-1 nova_compute[235132]: 2025-10-10 10:10:41.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [POLLOUT] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:10:41 compute-1 nova_compute[235132]: 2025-10-10 10:10:41.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 10 10:10:41 compute-1 nova_compute[235132]: 2025-10-10 10:10:41.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:10:41 compute-1 nova_compute[235132]: 2025-10-10 10:10:41.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:10:41 compute-1 nova_compute[235132]: 2025-10-10 10:10:41.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:10:41 compute-1 nova_compute[235132]: 2025-10-10 10:10:41.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:10:41 compute-1 nova_compute[235132]: 2025-10-10 10:10:41.325 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:10:41 compute-1 nova_compute[235132]: 2025-10-10 10:10:41.326 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 10 10:10:41 compute-1 nova_compute[235132]: 2025-10-10 10:10:41.326 2 INFO oslo.privsep.daemon [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpy733erh3/privsep.sock']
Oct 10 10:10:41 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:41 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53400021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:41 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:41 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:41 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:41.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:42 compute-1 nova_compute[235132]: 2025-10-10 10:10:42.003 2 INFO oslo.privsep.daemon [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Spawned new privsep daemon via rootwrap
Oct 10 10:10:42 compute-1 nova_compute[235132]: 2025-10-10 10:10:41.867 521 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 10 10:10:42 compute-1 nova_compute[235132]: 2025-10-10 10:10:41.872 521 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 10 10:10:42 compute-1 nova_compute[235132]: 2025-10-10 10:10:41.874 521 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Oct 10 10:10:42 compute-1 nova_compute[235132]: 2025-10-10 10:10:41.875 521 INFO oslo.privsep.daemon [-] privsep daemon running as pid 521
Oct 10 10:10:42 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:42 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:42 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:42.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:42.204 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:10:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:42.205 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:10:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:42.205 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:10:42 compute-1 ceph-mon[79167]: pgmap v742: 353 pgs: 353 active+clean; 88 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.5 MiB/s wr, 56 op/s
Oct 10 10:10:42 compute-1 nova_compute[235132]: 2025-10-10 10:10:42.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:10:42 compute-1 nova_compute[235132]: 2025-10-10 10:10:42.339 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3281ffe2-3f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:10:42 compute-1 nova_compute[235132]: 2025-10-10 10:10:42.340 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3281ffe2-3f, col_values=(('external_ids', {'iface-id': '3281ffe2-3fe8-4217-bcda-e7f8c55f5dae', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f6:78:9d', 'vm-uuid': 'b8379f65-91e0-45a5-a245-a1bc27260f20'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:10:42 compute-1 NetworkManager[44982]: <info>  [1760091042.3441] manager: (tap3281ffe2-3f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Oct 10 10:10:42 compute-1 nova_compute[235132]: 2025-10-10 10:10:42.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:10:42 compute-1 nova_compute[235132]: 2025-10-10 10:10:42.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 10 10:10:42 compute-1 nova_compute[235132]: 2025-10-10 10:10:42.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:10:42 compute-1 nova_compute[235132]: 2025-10-10 10:10:42.354 2 INFO os_vif [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:78:9d,bridge_name='br-int',has_traffic_filtering=True,id=3281ffe2-3fe8-4217-bcda-e7f8c55f5dae,network=Network(bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3281ffe2-3f')
Oct 10 10:10:42 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:10:42 compute-1 nova_compute[235132]: 2025-10-10 10:10:42.418 2 DEBUG nova.virt.libvirt.driver [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 10 10:10:42 compute-1 nova_compute[235132]: 2025-10-10 10:10:42.419 2 DEBUG nova.virt.libvirt.driver [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 10 10:10:42 compute-1 nova_compute[235132]: 2025-10-10 10:10:42.420 2 DEBUG nova.virt.libvirt.driver [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No VIF found with MAC fa:16:3e:f6:78:9d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 10 10:10:42 compute-1 nova_compute[235132]: 2025-10-10 10:10:42.421 2 INFO nova.virt.libvirt.driver [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Using config drive
Oct 10 10:10:42 compute-1 nova_compute[235132]: 2025-10-10 10:10:42.462 2 DEBUG nova.storage.rbd_utils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image b8379f65-91e0-45a5-a245-a1bc27260f20_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:10:42 compute-1 nova_compute[235132]: 2025-10-10 10:10:42.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:10:42 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:42 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:42 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:42 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:43 compute-1 ceph-mon[79167]: pgmap v743: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.1 MiB/s wr, 59 op/s
Oct 10 10:10:43 compute-1 nova_compute[235132]: 2025-10-10 10:10:43.556 2 DEBUG nova.network.neutron [req-0204d1cb-d5bb-4cb6-b0e6-52e654ea2aa7 req-4a3e6588-7508-4582-b122-0e648fd3aa5c 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Updated VIF entry in instance network info cache for port 3281ffe2-3fe8-4217-bcda-e7f8c55f5dae. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 10 10:10:43 compute-1 nova_compute[235132]: 2025-10-10 10:10:43.556 2 DEBUG nova.network.neutron [req-0204d1cb-d5bb-4cb6-b0e6-52e654ea2aa7 req-4a3e6588-7508-4582-b122-0e648fd3aa5c 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Updating instance_info_cache with network_info: [{"id": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "address": "fa:16:3e:f6:78:9d", "network": {"id": "bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11", "bridge": "br-int", "label": "tempest-network-smoke--1013272438", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3281ffe2-3f", "ovs_interfaceid": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:10:43 compute-1 nova_compute[235132]: 2025-10-10 10:10:43.576 2 DEBUG oslo_concurrency.lockutils [req-0204d1cb-d5bb-4cb6-b0e6-52e654ea2aa7 req-4a3e6588-7508-4582-b122-0e648fd3aa5c 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Releasing lock "refresh_cache-b8379f65-91e0-45a5-a245-a1bc27260f20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:10:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:43 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:43 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:43 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:10:43 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:43.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:10:44 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:44 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:44 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:44.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:44 compute-1 nova_compute[235132]: 2025-10-10 10:10:44.544 2 INFO nova.virt.libvirt.driver [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Creating config drive at /var/lib/nova/instances/b8379f65-91e0-45a5-a245-a1bc27260f20/disk.config
Oct 10 10:10:44 compute-1 nova_compute[235132]: 2025-10-10 10:10:44.551 2 DEBUG oslo_concurrency.processutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b8379f65-91e0-45a5-a245-a1bc27260f20/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_w0m0kgf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:10:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:44 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53400021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:44 compute-1 nova_compute[235132]: 2025-10-10 10:10:44.702 2 DEBUG oslo_concurrency.processutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b8379f65-91e0-45a5-a245-a1bc27260f20/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_w0m0kgf" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:10:44 compute-1 nova_compute[235132]: 2025-10-10 10:10:44.747 2 DEBUG nova.storage.rbd_utils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image b8379f65-91e0-45a5-a245-a1bc27260f20_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:10:44 compute-1 nova_compute[235132]: 2025-10-10 10:10:44.753 2 DEBUG oslo_concurrency.processutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b8379f65-91e0-45a5-a245-a1bc27260f20/disk.config b8379f65-91e0-45a5-a245-a1bc27260f20_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:10:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:44 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:44 compute-1 nova_compute[235132]: 2025-10-10 10:10:44.951 2 DEBUG oslo_concurrency.processutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b8379f65-91e0-45a5-a245-a1bc27260f20/disk.config b8379f65-91e0-45a5-a245-a1bc27260f20_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:10:44 compute-1 nova_compute[235132]: 2025-10-10 10:10:44.953 2 INFO nova.virt.libvirt.driver [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Deleting local config drive /var/lib/nova/instances/b8379f65-91e0-45a5-a245-a1bc27260f20/disk.config because it was imported into RBD.
Oct 10 10:10:44 compute-1 systemd[1]: Starting libvirt secret daemon...
Oct 10 10:10:45 compute-1 systemd[1]: Started libvirt secret daemon.
Oct 10 10:10:45 compute-1 nova_compute[235132]: 2025-10-10 10:10:45.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:10:45 compute-1 nova_compute[235132]: 2025-10-10 10:10:45.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 10 10:10:45 compute-1 nova_compute[235132]: 2025-10-10 10:10:45.081 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 10 10:10:45 compute-1 nova_compute[235132]: 2025-10-10 10:10:45.082 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:10:45 compute-1 nova_compute[235132]: 2025-10-10 10:10:45.082 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 10 10:10:45 compute-1 nova_compute[235132]: 2025-10-10 10:10:45.104 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:10:45 compute-1 kernel: tun: Universal TUN/TAP device driver, 1.6
Oct 10 10:10:45 compute-1 kernel: tap3281ffe2-3f: entered promiscuous mode
Oct 10 10:10:45 compute-1 NetworkManager[44982]: <info>  [1760091045.1115] manager: (tap3281ffe2-3f): new Tun device (/org/freedesktop/NetworkManager/Devices/26)
Oct 10 10:10:45 compute-1 ovn_controller[131749]: 2025-10-10T10:10:45Z|00027|binding|INFO|Claiming lport 3281ffe2-3fe8-4217-bcda-e7f8c55f5dae for this chassis.
Oct 10 10:10:45 compute-1 ovn_controller[131749]: 2025-10-10T10:10:45Z|00028|binding|INFO|3281ffe2-3fe8-4217-bcda-e7f8c55f5dae: Claiming fa:16:3e:f6:78:9d 10.100.0.6
Oct 10 10:10:45 compute-1 nova_compute[235132]: 2025-10-10 10:10:45.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:10:45 compute-1 nova_compute[235132]: 2025-10-10 10:10:45.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:10:45 compute-1 systemd-udevd[238834]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 10:10:45 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:45.169 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:78:9d 10.100.0.6'], port_security=['fa:16:3e:f6:78:9d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'b8379f65-91e0-45a5-a245-a1bc27260f20', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9213b2d5-68f1-49a1-a3cf-ea56345963fa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4cf25de6-ad2e-407a-bd52-f4f32badc3ec, chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>], logical_port=3281ffe2-3fe8-4217-bcda-e7f8c55f5dae) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:10:45 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:45.170 141156 INFO neutron.agent.ovn.metadata.agent [-] Port 3281ffe2-3fe8-4217-bcda-e7f8c55f5dae in datapath bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11 bound to our chassis
Oct 10 10:10:45 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:45.172 141156 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11
Oct 10 10:10:45 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:45.173 141156 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpyjp0jsis/privsep.sock']
Oct 10 10:10:45 compute-1 NetworkManager[44982]: <info>  [1760091045.1781] device (tap3281ffe2-3f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 10:10:45 compute-1 NetworkManager[44982]: <info>  [1760091045.1789] device (tap3281ffe2-3f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 10 10:10:45 compute-1 systemd-machined[191637]: New machine qemu-1-instance-00000001.
Oct 10 10:10:45 compute-1 nova_compute[235132]: 2025-10-10 10:10:45.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:10:45 compute-1 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Oct 10 10:10:45 compute-1 ovn_controller[131749]: 2025-10-10T10:10:45Z|00029|binding|INFO|Setting lport 3281ffe2-3fe8-4217-bcda-e7f8c55f5dae ovn-installed in OVS
Oct 10 10:10:45 compute-1 ovn_controller[131749]: 2025-10-10T10:10:45Z|00030|binding|INFO|Setting lport 3281ffe2-3fe8-4217-bcda-e7f8c55f5dae up in Southbound
Oct 10 10:10:45 compute-1 nova_compute[235132]: 2025-10-10 10:10:45.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:10:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:45 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:45 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:45 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:10:45 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:45.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:10:45 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:45.860 141156 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 10 10:10:45 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:45.861 141156 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpyjp0jsis/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 10 10:10:45 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:45.747 238898 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 10 10:10:45 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:45.752 238898 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 10 10:10:45 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:45.754 238898 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Oct 10 10:10:45 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:45.754 238898 INFO oslo.privsep.daemon [-] privsep daemon running as pid 238898
Oct 10 10:10:45 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:45.864 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[a8cda263-6ce8-4638-a73e-1265b72a5d22]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:10:45 compute-1 ceph-mon[79167]: pgmap v744: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 50 op/s
Oct 10 10:10:45 compute-1 nova_compute[235132]: 2025-10-10 10:10:45.929 2 DEBUG nova.compute.manager [req-357ee546-d167-47fd-9ff2-cca7544349c2 req-786e130a-3f7c-4d2e-b8de-85e01a551076 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Received event network-vif-plugged-3281ffe2-3fe8-4217-bcda-e7f8c55f5dae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:10:45 compute-1 nova_compute[235132]: 2025-10-10 10:10:45.930 2 DEBUG oslo_concurrency.lockutils [req-357ee546-d167-47fd-9ff2-cca7544349c2 req-786e130a-3f7c-4d2e-b8de-85e01a551076 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "b8379f65-91e0-45a5-a245-a1bc27260f20-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:10:45 compute-1 nova_compute[235132]: 2025-10-10 10:10:45.930 2 DEBUG oslo_concurrency.lockutils [req-357ee546-d167-47fd-9ff2-cca7544349c2 req-786e130a-3f7c-4d2e-b8de-85e01a551076 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "b8379f65-91e0-45a5-a245-a1bc27260f20-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:10:45 compute-1 nova_compute[235132]: 2025-10-10 10:10:45.930 2 DEBUG oslo_concurrency.lockutils [req-357ee546-d167-47fd-9ff2-cca7544349c2 req-786e130a-3f7c-4d2e-b8de-85e01a551076 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "b8379f65-91e0-45a5-a245-a1bc27260f20-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:10:45 compute-1 nova_compute[235132]: 2025-10-10 10:10:45.931 2 DEBUG nova.compute.manager [req-357ee546-d167-47fd-9ff2-cca7544349c2 req-786e130a-3f7c-4d2e-b8de-85e01a551076 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Processing event network-vif-plugged-3281ffe2-3fe8-4217-bcda-e7f8c55f5dae _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 10 10:10:46 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:46 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:10:46 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:46.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:10:46 compute-1 nova_compute[235132]: 2025-10-10 10:10:46.183 2 DEBUG nova.virt.driver [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Emitting event <LifecycleEvent: 1760091046.182954, b8379f65-91e0-45a5-a245-a1bc27260f20 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 10:10:46 compute-1 nova_compute[235132]: 2025-10-10 10:10:46.184 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] VM Started (Lifecycle Event)
Oct 10 10:10:46 compute-1 nova_compute[235132]: 2025-10-10 10:10:46.187 2 DEBUG nova.compute.manager [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 10 10:10:46 compute-1 nova_compute[235132]: 2025-10-10 10:10:46.192 2 DEBUG nova.virt.libvirt.driver [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 10 10:10:46 compute-1 nova_compute[235132]: 2025-10-10 10:10:46.196 2 INFO nova.virt.libvirt.driver [-] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Instance spawned successfully.
Oct 10 10:10:46 compute-1 nova_compute[235132]: 2025-10-10 10:10:46.196 2 DEBUG nova.virt.libvirt.driver [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 10 10:10:46 compute-1 nova_compute[235132]: 2025-10-10 10:10:46.248 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:10:46 compute-1 nova_compute[235132]: 2025-10-10 10:10:46.253 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 10 10:10:46 compute-1 nova_compute[235132]: 2025-10-10 10:10:46.263 2 DEBUG nova.virt.libvirt.driver [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:10:46 compute-1 nova_compute[235132]: 2025-10-10 10:10:46.263 2 DEBUG nova.virt.libvirt.driver [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:10:46 compute-1 nova_compute[235132]: 2025-10-10 10:10:46.264 2 DEBUG nova.virt.libvirt.driver [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:10:46 compute-1 nova_compute[235132]: 2025-10-10 10:10:46.264 2 DEBUG nova.virt.libvirt.driver [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:10:46 compute-1 nova_compute[235132]: 2025-10-10 10:10:46.265 2 DEBUG nova.virt.libvirt.driver [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:10:46 compute-1 nova_compute[235132]: 2025-10-10 10:10:46.265 2 DEBUG nova.virt.libvirt.driver [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:10:46 compute-1 nova_compute[235132]: 2025-10-10 10:10:46.273 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 10 10:10:46 compute-1 nova_compute[235132]: 2025-10-10 10:10:46.273 2 DEBUG nova.virt.driver [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Emitting event <LifecycleEvent: 1760091046.183129, b8379f65-91e0-45a5-a245-a1bc27260f20 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 10:10:46 compute-1 nova_compute[235132]: 2025-10-10 10:10:46.273 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] VM Paused (Lifecycle Event)
Oct 10 10:10:46 compute-1 nova_compute[235132]: 2025-10-10 10:10:46.307 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:10:46 compute-1 nova_compute[235132]: 2025-10-10 10:10:46.311 2 DEBUG nova.virt.driver [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Emitting event <LifecycleEvent: 1760091046.1907501, b8379f65-91e0-45a5-a245-a1bc27260f20 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 10:10:46 compute-1 nova_compute[235132]: 2025-10-10 10:10:46.312 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] VM Resumed (Lifecycle Event)
Oct 10 10:10:46 compute-1 nova_compute[235132]: 2025-10-10 10:10:46.325 2 INFO nova.compute.manager [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Took 13.00 seconds to spawn the instance on the hypervisor.
Oct 10 10:10:46 compute-1 nova_compute[235132]: 2025-10-10 10:10:46.326 2 DEBUG nova.compute.manager [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:10:46 compute-1 nova_compute[235132]: 2025-10-10 10:10:46.333 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:10:46 compute-1 nova_compute[235132]: 2025-10-10 10:10:46.336 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 10 10:10:46 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:46.557 238898 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:10:46 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:46.561 238898 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:10:46 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:46.561 238898 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:10:46 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:46 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:46 compute-1 nova_compute[235132]: 2025-10-10 10:10:46.688 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 10 10:10:46 compute-1 nova_compute[235132]: 2025-10-10 10:10:46.738 2 INFO nova.compute.manager [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Took 14.27 seconds to build instance.
Oct 10 10:10:46 compute-1 nova_compute[235132]: 2025-10-10 10:10:46.760 2 DEBUG oslo_concurrency.lockutils [None req-f06a2ff1-b1c7-4c9d-8f7c-f838566fc144 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "b8379f65-91e0-45a5-a245-a1bc27260f20" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.415s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:10:46 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:10:46 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:46 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:47 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:47.288 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[9c1066f9-2aac-456d-ab48-6178219806ee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:10:47 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:47.289 141156 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbc8bfbd1-b1 in ovnmeta-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 10 10:10:47 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:47.291 238898 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbc8bfbd1-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 10 10:10:47 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:47.292 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[9146cef0-a189-43e8-a5dd-ebe39cb54735]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:10:47 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:47.296 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[9966277b-9347-4e5f-b078-764edeb201e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:10:47 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:47.330 141275 DEBUG oslo.privsep.daemon [-] privsep: reply[78be8b0a-6626-46a1-a0ab-84bd5fb4ab75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:10:47 compute-1 nova_compute[235132]: 2025-10-10 10:10:47.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:10:47 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:47.361 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[3698945f-a38c-4d70-862b-02f4ad0b584a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:10:47 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:47.364 141156 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpni3167tp/privsep.sock']
Oct 10 10:10:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:10:47 compute-1 nova_compute[235132]: 2025-10-10 10:10:47.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:10:47 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:47 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:47 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:47 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:47 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:47.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:47 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:47 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:10:47 compute-1 ceph-mon[79167]: pgmap v745: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 41 op/s
Oct 10 10:10:48 compute-1 nova_compute[235132]: 2025-10-10 10:10:48.009 2 DEBUG nova.compute.manager [req-d65368e9-1fca-4c15-b6cd-118b4f56628c req-03dd76dd-b7e3-4fed-a4be-5d3004636c5d 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Received event network-vif-plugged-3281ffe2-3fe8-4217-bcda-e7f8c55f5dae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:10:48 compute-1 nova_compute[235132]: 2025-10-10 10:10:48.009 2 DEBUG oslo_concurrency.lockutils [req-d65368e9-1fca-4c15-b6cd-118b4f56628c req-03dd76dd-b7e3-4fed-a4be-5d3004636c5d 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "b8379f65-91e0-45a5-a245-a1bc27260f20-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:10:48 compute-1 nova_compute[235132]: 2025-10-10 10:10:48.009 2 DEBUG oslo_concurrency.lockutils [req-d65368e9-1fca-4c15-b6cd-118b4f56628c req-03dd76dd-b7e3-4fed-a4be-5d3004636c5d 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "b8379f65-91e0-45a5-a245-a1bc27260f20-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:10:48 compute-1 nova_compute[235132]: 2025-10-10 10:10:48.009 2 DEBUG oslo_concurrency.lockutils [req-d65368e9-1fca-4c15-b6cd-118b4f56628c req-03dd76dd-b7e3-4fed-a4be-5d3004636c5d 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "b8379f65-91e0-45a5-a245-a1bc27260f20-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:10:48 compute-1 nova_compute[235132]: 2025-10-10 10:10:48.009 2 DEBUG nova.compute.manager [req-d65368e9-1fca-4c15-b6cd-118b4f56628c req-03dd76dd-b7e3-4fed-a4be-5d3004636c5d 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] No waiting events found dispatching network-vif-plugged-3281ffe2-3fe8-4217-bcda-e7f8c55f5dae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 10:10:48 compute-1 nova_compute[235132]: 2025-10-10 10:10:48.009 2 WARNING nova.compute.manager [req-d65368e9-1fca-4c15-b6cd-118b4f56628c req-03dd76dd-b7e3-4fed-a4be-5d3004636c5d 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Received unexpected event network-vif-plugged-3281ffe2-3fe8-4217-bcda-e7f8c55f5dae for instance with vm_state active and task_state None.
Oct 10 10:10:48 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:48.077 141156 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 10 10:10:48 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:48.078 141156 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpni3167tp/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 10 10:10:48 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:47.921 238913 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 10 10:10:48 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:47.926 238913 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 10 10:10:48 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:47.928 238913 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Oct 10 10:10:48 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:47.928 238913 INFO oslo.privsep.daemon [-] privsep daemon running as pid 238913
Oct 10 10:10:48 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:48.082 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[a142bc6c-8499-462f-87cc-58b9cb8382ef]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:10:48 compute-1 nova_compute[235132]: 2025-10-10 10:10:48.114 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:10:48 compute-1 nova_compute[235132]: 2025-10-10 10:10:48.115 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:10:48 compute-1 nova_compute[235132]: 2025-10-10 10:10:48.115 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:10:48 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:48 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:48 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:48.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:48 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:48.617 238913 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:10:48 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:48.617 238913 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:10:48 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:48.617 238913 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:10:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:48 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:48 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:48 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3226504384' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:10:49 compute-1 nova_compute[235132]: 2025-10-10 10:10:49.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:10:49 compute-1 nova_compute[235132]: 2025-10-10 10:10:49.044 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 10:10:49 compute-1 nova_compute[235132]: 2025-10-10 10:10:49.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:10:49 compute-1 nova_compute[235132]: 2025-10-10 10:10:49.066 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:10:49 compute-1 nova_compute[235132]: 2025-10-10 10:10:49.066 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:10:49 compute-1 nova_compute[235132]: 2025-10-10 10:10:49.066 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:10:49 compute-1 nova_compute[235132]: 2025-10-10 10:10:49.066 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:10:49 compute-1 nova_compute[235132]: 2025-10-10 10:10:49.067 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:10:49 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:49.197 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[4b17a7d2-41e2-457d-8aac-bfca96ef594b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:10:49 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:49.203 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[dabf970e-d5d0-4ccb-8194-15291a6ee46d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:10:49 compute-1 NetworkManager[44982]: <info>  [1760091049.2056] manager: (tapbc8bfbd1-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/27)
Oct 10 10:10:49 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:49.249 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[dbd28f54-e7f0-46fe-aaa8-dbb6d586cd18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:10:49 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:49.255 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[a83cfd7b-68ea-41fa-aedb-e103a13ce0fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:10:49 compute-1 systemd-udevd[238962]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 10:10:49 compute-1 NetworkManager[44982]: <info>  [1760091049.2844] device (tapbc8bfbd1-b0): carrier: link connected
Oct 10 10:10:49 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:49.290 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[f568382a-730f-4484-a4d4-98f62825cba2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:10:49 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:49.316 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[52145dbf-c7e7-4853-937e-d58a6e913cc1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbc8bfbd1-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b0:59:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396299, 'reachable_time': 20753, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 238970, 'error': None, 'target': 'ovnmeta-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:10:49 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:49.332 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[c3cddbb7-fc3e-4f4b-80ff-44a8b600ff00]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb0:5920'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396299, 'tstamp': 396299}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 238998, 'error': None, 'target': 'ovnmeta-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:10:49 compute-1 podman[238944]: 2025-10-10 10:10:49.338662117 +0000 UTC m=+0.094353910 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Oct 10 10:10:49 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:49.350 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[2790fd6b-e8ab-4e44-a359-106a8dbeb61c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbc8bfbd1-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b0:59:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396299, 'reachable_time': 20753, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 239001, 'error': None, 'target': 'ovnmeta-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:10:49 compute-1 podman[238941]: 2025-10-10 10:10:49.361372288 +0000 UTC m=+0.120801934 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 10 10:10:49 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:49.387 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[a4306a50-ecb6-47cb-a863-87076f8ef224]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:10:49 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:49.454 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[8bd6f323-912c-49bd-9dcb-3474cc23f1aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:10:49 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:49.456 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbc8bfbd1-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:10:49 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:49.456 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 10 10:10:49 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:49.457 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbc8bfbd1-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:10:49 compute-1 NetworkManager[44982]: <info>  [1760091049.4596] manager: (tapbc8bfbd1-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Oct 10 10:10:49 compute-1 kernel: tapbc8bfbd1-b0: entered promiscuous mode
Oct 10 10:10:49 compute-1 nova_compute[235132]: 2025-10-10 10:10:49.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:10:49 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:49.465 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbc8bfbd1-b0, col_values=(('external_ids', {'iface-id': '39ed96bc-4f7e-4f78-812d-fbc3e55cd01d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:10:49 compute-1 nova_compute[235132]: 2025-10-10 10:10:49.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:10:49 compute-1 ovn_controller[131749]: 2025-10-10T10:10:49Z|00031|binding|INFO|Releasing lport 39ed96bc-4f7e-4f78-812d-fbc3e55cd01d from this chassis (sb_readonly=0)
Oct 10 10:10:49 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:49.469 141156 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 10 10:10:49 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:49.477 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[951fdabc-b446-4932-8a34-f52a8efc3bbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:10:49 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:49.479 141156 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 10 10:10:49 compute-1 ovn_metadata_agent[141151]: global
Oct 10 10:10:49 compute-1 ovn_metadata_agent[141151]:     log         /dev/log local0 debug
Oct 10 10:10:49 compute-1 ovn_metadata_agent[141151]:     log-tag     haproxy-metadata-proxy-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11
Oct 10 10:10:49 compute-1 ovn_metadata_agent[141151]:     user        root
Oct 10 10:10:49 compute-1 ovn_metadata_agent[141151]:     group       root
Oct 10 10:10:49 compute-1 ovn_metadata_agent[141151]:     maxconn     1024
Oct 10 10:10:49 compute-1 ovn_metadata_agent[141151]:     pidfile     /var/lib/neutron/external/pids/bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11.pid.haproxy
Oct 10 10:10:49 compute-1 ovn_metadata_agent[141151]:     daemon
Oct 10 10:10:49 compute-1 ovn_metadata_agent[141151]: 
Oct 10 10:10:49 compute-1 ovn_metadata_agent[141151]: defaults
Oct 10 10:10:49 compute-1 ovn_metadata_agent[141151]:     log global
Oct 10 10:10:49 compute-1 ovn_metadata_agent[141151]:     mode http
Oct 10 10:10:49 compute-1 ovn_metadata_agent[141151]:     option httplog
Oct 10 10:10:49 compute-1 ovn_metadata_agent[141151]:     option dontlognull
Oct 10 10:10:49 compute-1 ovn_metadata_agent[141151]:     option http-server-close
Oct 10 10:10:49 compute-1 ovn_metadata_agent[141151]:     option forwardfor
Oct 10 10:10:49 compute-1 ovn_metadata_agent[141151]:     retries                 3
Oct 10 10:10:49 compute-1 ovn_metadata_agent[141151]:     timeout http-request    30s
Oct 10 10:10:49 compute-1 ovn_metadata_agent[141151]:     timeout connect         30s
Oct 10 10:10:49 compute-1 ovn_metadata_agent[141151]:     timeout client          32s
Oct 10 10:10:49 compute-1 ovn_metadata_agent[141151]:     timeout server          32s
Oct 10 10:10:49 compute-1 ovn_metadata_agent[141151]:     timeout http-keep-alive 30s
Oct 10 10:10:49 compute-1 ovn_metadata_agent[141151]: 
Oct 10 10:10:49 compute-1 ovn_metadata_agent[141151]: 
Oct 10 10:10:49 compute-1 ovn_metadata_agent[141151]: listen listener
Oct 10 10:10:49 compute-1 ovn_metadata_agent[141151]:     bind 169.254.169.254:80
Oct 10 10:10:49 compute-1 ovn_metadata_agent[141151]:     server metadata /var/lib/neutron/metadata_proxy
Oct 10 10:10:49 compute-1 ovn_metadata_agent[141151]:     http-request add-header X-OVN-Network-ID bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11
Oct 10 10:10:49 compute-1 ovn_metadata_agent[141151]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 10 10:10:49 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:10:49.479 141156 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11', 'env', 'PROCESS_TAG=haproxy-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 10 10:10:49 compute-1 nova_compute[235132]: 2025-10-10 10:10:49.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:10:49 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:10:49 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2896383186' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:10:49 compute-1 nova_compute[235132]: 2025-10-10 10:10:49.565 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:10:49 compute-1 nova_compute[235132]: 2025-10-10 10:10:49.695 2 DEBUG nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 10 10:10:49 compute-1 nova_compute[235132]: 2025-10-10 10:10:49.696 2 DEBUG nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 10 10:10:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:49 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53140016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:49 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:49 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:49 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:49.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:49 compute-1 nova_compute[235132]: 2025-10-10 10:10:49.881 2 WARNING nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:10:49 compute-1 nova_compute[235132]: 2025-10-10 10:10:49.883 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4810MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:10:49 compute-1 nova_compute[235132]: 2025-10-10 10:10:49.883 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:10:49 compute-1 nova_compute[235132]: 2025-10-10 10:10:49.883 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:10:49 compute-1 podman[239040]: 2025-10-10 10:10:49.910957885 +0000 UTC m=+0.060967298 container create 938921b816c5789d3219b8ece791d4e46b149d24d31311148bb49416bfe16485 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:10:49 compute-1 systemd[1]: Started libpod-conmon-938921b816c5789d3219b8ece791d4e46b149d24d31311148bb49416bfe16485.scope.
Oct 10 10:10:49 compute-1 ceph-mon[79167]: pgmap v746: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct 10 10:10:49 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/2896383186' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:10:49 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2553159324' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:10:49 compute-1 podman[239040]: 2025-10-10 10:10:49.881052577 +0000 UTC m=+0.031061970 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 10 10:10:49 compute-1 systemd[1]: Started libcrun container.
Oct 10 10:10:50 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d9cdc93c5c72ed4f0d25c08523e3110ee9304b40ab9c09ff8991fb32bfd66f7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 10 10:10:50 compute-1 podman[239040]: 2025-10-10 10:10:50.018592907 +0000 UTC m=+0.168602290 container init 938921b816c5789d3219b8ece791d4e46b149d24d31311148bb49416bfe16485 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 10 10:10:50 compute-1 podman[239040]: 2025-10-10 10:10:50.028156418 +0000 UTC m=+0.178165801 container start 938921b816c5789d3219b8ece791d4e46b149d24d31311148bb49416bfe16485 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:10:50 compute-1 nova_compute[235132]: 2025-10-10 10:10:50.033 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Instance b8379f65-91e0-45a5-a245-a1bc27260f20 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 10 10:10:50 compute-1 nova_compute[235132]: 2025-10-10 10:10:50.034 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:10:50 compute-1 nova_compute[235132]: 2025-10-10 10:10:50.034 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:10:50 compute-1 neutron-haproxy-ovnmeta-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11[239056]: [NOTICE]   (239060) : New worker (239062) forked
Oct 10 10:10:50 compute-1 neutron-haproxy-ovnmeta-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11[239056]: [NOTICE]   (239060) : Loading success.
Oct 10 10:10:50 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:50 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:10:50 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:50.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:10:50 compute-1 nova_compute[235132]: 2025-10-10 10:10:50.130 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Refreshing inventories for resource provider c9b2c4a3-cb19-4387-8719-36027e3cdaec _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 10 10:10:50 compute-1 nova_compute[235132]: 2025-10-10 10:10:50.213 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Updating ProviderTree inventory for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 10 10:10:50 compute-1 nova_compute[235132]: 2025-10-10 10:10:50.213 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Updating inventory in ProviderTree for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 10 10:10:50 compute-1 nova_compute[235132]: 2025-10-10 10:10:50.228 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Refreshing aggregate associations for resource provider c9b2c4a3-cb19-4387-8719-36027e3cdaec, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 10 10:10:50 compute-1 nova_compute[235132]: 2025-10-10 10:10:50.255 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Refreshing trait associations for resource provider c9b2c4a3-cb19-4387-8719-36027e3cdaec, traits: HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_F16C,HW_CPU_X86_AVX,HW_CPU_X86_FMA3,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE42,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI2,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 10 10:10:50 compute-1 nova_compute[235132]: 2025-10-10 10:10:50.288 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:10:50 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:50 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53140016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:50 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:10:50 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3253010054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:10:50 compute-1 nova_compute[235132]: 2025-10-10 10:10:50.749 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:10:50 compute-1 nova_compute[235132]: 2025-10-10 10:10:50.759 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Updating inventory in ProviderTree for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 10 10:10:50 compute-1 nova_compute[235132]: 2025-10-10 10:10:50.823 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Updated inventory for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Oct 10 10:10:50 compute-1 nova_compute[235132]: 2025-10-10 10:10:50.824 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Updating resource provider c9b2c4a3-cb19-4387-8719-36027e3cdaec generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct 10 10:10:50 compute-1 nova_compute[235132]: 2025-10-10 10:10:50.824 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Updating inventory in ProviderTree for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 10 10:10:50 compute-1 nova_compute[235132]: 2025-10-10 10:10:50.851 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:10:50 compute-1 nova_compute[235132]: 2025-10-10 10:10:50.852 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.969s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:10:50 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:50 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:10:50 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:50 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:10:50 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:50 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:50 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3253010054' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:10:51 compute-1 ovn_controller[131749]: 2025-10-10T10:10:51Z|00032|binding|INFO|Releasing lport 39ed96bc-4f7e-4f78-812d-fbc3e55cd01d from this chassis (sb_readonly=0)
Oct 10 10:10:51 compute-1 NetworkManager[44982]: <info>  [1760091051.1680] manager: (patch-provnet-1d90fa58-74cb-4ad4-84e0-739689a69111-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/29)
Oct 10 10:10:51 compute-1 NetworkManager[44982]: <info>  [1760091051.1685] device (patch-provnet-1d90fa58-74cb-4ad4-84e0-739689a69111-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 10:10:51 compute-1 NetworkManager[44982]: <info>  [1760091051.1694] manager: (patch-br-int-to-provnet-1d90fa58-74cb-4ad4-84e0-739689a69111): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/30)
Oct 10 10:10:51 compute-1 NetworkManager[44982]: <info>  [1760091051.1696] device (patch-br-int-to-provnet-1d90fa58-74cb-4ad4-84e0-739689a69111)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 10 10:10:51 compute-1 NetworkManager[44982]: <info>  [1760091051.1702] manager: (patch-br-int-to-provnet-1d90fa58-74cb-4ad4-84e0-739689a69111): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Oct 10 10:10:51 compute-1 NetworkManager[44982]: <info>  [1760091051.1707] manager: (patch-provnet-1d90fa58-74cb-4ad4-84e0-739689a69111-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Oct 10 10:10:51 compute-1 NetworkManager[44982]: <info>  [1760091051.1710] device (patch-provnet-1d90fa58-74cb-4ad4-84e0-739689a69111-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 10 10:10:51 compute-1 NetworkManager[44982]: <info>  [1760091051.1712] device (patch-br-int-to-provnet-1d90fa58-74cb-4ad4-84e0-739689a69111)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 10 10:10:51 compute-1 nova_compute[235132]: 2025-10-10 10:10:51.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:10:51 compute-1 ovn_controller[131749]: 2025-10-10T10:10:51Z|00033|binding|INFO|Releasing lport 39ed96bc-4f7e-4f78-812d-fbc3e55cd01d from this chassis (sb_readonly=0)
Oct 10 10:10:51 compute-1 nova_compute[235132]: 2025-10-10 10:10:51.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:10:51 compute-1 nova_compute[235132]: 2025-10-10 10:10:51.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:10:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:51 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53140016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:51 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:51 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:51 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:51.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:51 compute-1 nova_compute[235132]: 2025-10-10 10:10:51.852 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:10:51 compute-1 nova_compute[235132]: 2025-10-10 10:10:51.853 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 10:10:51 compute-1 nova_compute[235132]: 2025-10-10 10:10:51.854 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 10:10:51 compute-1 podman[239095]: 2025-10-10 10:10:51.872524845 +0000 UTC m=+0.119933270 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 10:10:51 compute-1 ceph-mon[79167]: pgmap v747: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 13 KiB/s wr, 64 op/s
Oct 10 10:10:52 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:52 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:10:52 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:52.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:10:52 compute-1 nova_compute[235132]: 2025-10-10 10:10:52.134 2 DEBUG nova.compute.manager [req-b8927a53-68da-4df2-b951-e1962297a9cc req-b0ce4cf0-2b74-4c86-bb7b-2bc015316322 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Received event network-changed-3281ffe2-3fe8-4217-bcda-e7f8c55f5dae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:10:52 compute-1 nova_compute[235132]: 2025-10-10 10:10:52.135 2 DEBUG nova.compute.manager [req-b8927a53-68da-4df2-b951-e1962297a9cc req-b0ce4cf0-2b74-4c86-bb7b-2bc015316322 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Refreshing instance network info cache due to event network-changed-3281ffe2-3fe8-4217-bcda-e7f8c55f5dae. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 10 10:10:52 compute-1 nova_compute[235132]: 2025-10-10 10:10:52.136 2 DEBUG oslo_concurrency.lockutils [req-b8927a53-68da-4df2-b951-e1962297a9cc req-b0ce4cf0-2b74-4c86-bb7b-2bc015316322 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "refresh_cache-b8379f65-91e0-45a5-a245-a1bc27260f20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:10:52 compute-1 nova_compute[235132]: 2025-10-10 10:10:52.136 2 DEBUG oslo_concurrency.lockutils [req-b8927a53-68da-4df2-b951-e1962297a9cc req-b0ce4cf0-2b74-4c86-bb7b-2bc015316322 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquired lock "refresh_cache-b8379f65-91e0-45a5-a245-a1bc27260f20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:10:52 compute-1 nova_compute[235132]: 2025-10-10 10:10:52.137 2 DEBUG nova.network.neutron [req-b8927a53-68da-4df2-b951-e1962297a9cc req-b0ce4cf0-2b74-4c86-bb7b-2bc015316322 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Refreshing network info cache for port 3281ffe2-3fe8-4217-bcda-e7f8c55f5dae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 10 10:10:52 compute-1 nova_compute[235132]: 2025-10-10 10:10:52.205 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "refresh_cache-b8379f65-91e0-45a5-a245-a1bc27260f20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:10:52 compute-1 nova_compute[235132]: 2025-10-10 10:10:52.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:10:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:10:52 compute-1 nova_compute[235132]: 2025-10-10 10:10:52.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:10:52 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:52 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:52 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:52 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5344001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:53 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1333009736' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:10:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:53 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:53 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:53 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 10:10:53 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:53.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 10:10:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:53 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 10:10:54 compute-1 nova_compute[235132]: 2025-10-10 10:10:53.999 2 DEBUG nova.network.neutron [req-b8927a53-68da-4df2-b951-e1962297a9cc req-b0ce4cf0-2b74-4c86-bb7b-2bc015316322 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Updated VIF entry in instance network info cache for port 3281ffe2-3fe8-4217-bcda-e7f8c55f5dae. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 10 10:10:54 compute-1 nova_compute[235132]: 2025-10-10 10:10:54.000 2 DEBUG nova.network.neutron [req-b8927a53-68da-4df2-b951-e1962297a9cc req-b0ce4cf0-2b74-4c86-bb7b-2bc015316322 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Updating instance_info_cache with network_info: [{"id": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "address": "fa:16:3e:f6:78:9d", "network": {"id": "bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11", "bridge": "br-int", "label": "tempest-network-smoke--1013272438", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3281ffe2-3f", "ovs_interfaceid": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:10:54 compute-1 nova_compute[235132]: 2025-10-10 10:10:54.024 2 DEBUG oslo_concurrency.lockutils [req-b8927a53-68da-4df2-b951-e1962297a9cc req-b0ce4cf0-2b74-4c86-bb7b-2bc015316322 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Releasing lock "refresh_cache-b8379f65-91e0-45a5-a245-a1bc27260f20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:10:54 compute-1 ceph-mon[79167]: pgmap v748: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 82 op/s
Oct 10 10:10:54 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/547334843' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:10:54 compute-1 nova_compute[235132]: 2025-10-10 10:10:54.026 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquired lock "refresh_cache-b8379f65-91e0-45a5-a245-a1bc27260f20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:10:54 compute-1 nova_compute[235132]: 2025-10-10 10:10:54.027 2 DEBUG nova.network.neutron [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 10 10:10:54 compute-1 nova_compute[235132]: 2025-10-10 10:10:54.028 2 DEBUG nova.objects.instance [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lazy-loading 'info_cache' on Instance uuid b8379f65-91e0-45a5-a245-a1bc27260f20 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 10:10:54 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:54 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:54 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:54.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:54 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:54 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53140016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:54 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:54 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:55 compute-1 nova_compute[235132]: 2025-10-10 10:10:55.342 2 DEBUG nova.network.neutron [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Updating instance_info_cache with network_info: [{"id": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "address": "fa:16:3e:f6:78:9d", "network": {"id": "bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11", "bridge": "br-int", "label": "tempest-network-smoke--1013272438", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3281ffe2-3f", "ovs_interfaceid": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:10:55 compute-1 nova_compute[235132]: 2025-10-10 10:10:55.366 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Releasing lock "refresh_cache-b8379f65-91e0-45a5-a245-a1bc27260f20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:10:55 compute-1 nova_compute[235132]: 2025-10-10 10:10:55.367 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 10 10:10:55 compute-1 nova_compute[235132]: 2025-10-10 10:10:55.368 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:10:55 compute-1 nova_compute[235132]: 2025-10-10 10:10:55.368 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:10:55 compute-1 nova_compute[235132]: 2025-10-10 10:10:55.369 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:10:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:55 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53440023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:55 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:55 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:55 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:55.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:56 compute-1 ceph-mon[79167]: pgmap v749: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 76 op/s
Oct 10 10:10:56 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:56 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:56 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:56.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:56 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:56 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:56 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:56 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53140016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:57 compute-1 sudo[239125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:10:57 compute-1 sudo[239125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:10:57 compute-1 sudo[239125]: pam_unix(sudo:session): session closed for user root
Oct 10 10:10:57 compute-1 nova_compute[235132]: 2025-10-10 10:10:57.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:10:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:10:57 compute-1 nova_compute[235132]: 2025-10-10 10:10:57.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:10:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:57 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:57 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:57 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:10:57 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:57.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:10:57 compute-1 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 10 10:10:58 compute-1 ceph-mon[79167]: pgmap v750: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 76 op/s
Oct 10 10:10:58 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:58 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:10:58 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:10:58.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:10:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:58 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53440023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/101058 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 10:10:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:58 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:59 compute-1 ovn_controller[131749]: 2025-10-10T10:10:59Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f6:78:9d 10.100.0.6
Oct 10 10:10:59 compute-1 ovn_controller[131749]: 2025-10-10T10:10:59Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f6:78:9d 10.100.0.6
Oct 10 10:10:59 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:10:59 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53140032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:10:59 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:10:59 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:10:59 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:10:59.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:11:00 compute-1 ceph-mon[79167]: pgmap v751: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 79 op/s
Oct 10 10:11:00 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:00 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:00 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:00.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:00 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:00 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:00 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:00 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53440030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:01 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:01 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:01 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:01 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:01 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:01.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:02 compute-1 ceph-mon[79167]: pgmap v752: 353 pgs: 353 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 584 KiB/s rd, 938 B/s wr, 23 op/s
Oct 10 10:11:02 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:11:02 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:02 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:02 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:02.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:02 compute-1 nova_compute[235132]: 2025-10-10 10:11:02.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:11:02 compute-1 nova_compute[235132]: 2025-10-10 10:11:02.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:02 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:02 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53140032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:02 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:02 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:03 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53440030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:03 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:03 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:03 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:03.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:04 compute-1 ceph-mon[79167]: pgmap v753: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 903 KiB/s rd, 2.1 MiB/s wr, 85 op/s
Oct 10 10:11:04 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:04 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:04 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:04.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:04 compute-1 nova_compute[235132]: 2025-10-10 10:11:04.640 2 INFO nova.compute.manager [None req-6625c77b-2558-407d-bb18-30af1a7e06f6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Get console output
Oct 10 10:11:04 compute-1 nova_compute[235132]: 2025-10-10 10:11:04.646 2 INFO oslo.privsep.daemon [None req-6625c77b-2558-407d-bb18-30af1a7e06f6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp44lnzc9u/privsep.sock']
Oct 10 10:11:04 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:04 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:04 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:04 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53440030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:05 compute-1 nova_compute[235132]: 2025-10-10 10:11:05.365 2 INFO oslo.privsep.daemon [None req-6625c77b-2558-407d-bb18-30af1a7e06f6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Spawned new privsep daemon via rootwrap
Oct 10 10:11:05 compute-1 nova_compute[235132]: 2025-10-10 10:11:05.233 631 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 10 10:11:05 compute-1 nova_compute[235132]: 2025-10-10 10:11:05.239 631 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 10 10:11:05 compute-1 nova_compute[235132]: 2025-10-10 10:11:05.243 631 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Oct 10 10:11:05 compute-1 nova_compute[235132]: 2025-10-10 10:11:05.244 631 INFO oslo.privsep.daemon [-] privsep daemon running as pid 631
Oct 10 10:11:05 compute-1 nova_compute[235132]: 2025-10-10 10:11:05.476 631 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 10 10:11:05 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:05 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:05 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:05 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:11:05 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:05.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:11:05 compute-1 podman[239162]: 2025-10-10 10:11:05.969303367 +0000 UTC m=+0.075681981 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 10:11:06 compute-1 ceph-mon[79167]: pgmap v754: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 10 10:11:06 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:06 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:06 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:06.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:06 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:06 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:06 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:06 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:07 compute-1 nova_compute[235132]: 2025-10-10 10:11:07.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:11:07 compute-1 nova_compute[235132]: 2025-10-10 10:11:07.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:07 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53440041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:07 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:07 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:11:07 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:07.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:11:08 compute-1 ceph-mon[79167]: pgmap v755: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 10 10:11:08 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:08 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:11:08 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:08.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:11:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:08 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:08 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:09 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328003c30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:09 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:09 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:09 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:09.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:10 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:10 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:11:10 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:10.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:11:10 compute-1 ceph-mon[79167]: pgmap v756: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 10 10:11:10 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:10 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53440041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:10 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:10 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:11 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:11 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:11 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:11:11 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:11.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:11:12 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:12 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:12 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:12.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:12 compute-1 ceph-mon[79167]: pgmap v757: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 10 10:11:12 compute-1 nova_compute[235132]: 2025-10-10 10:11:12.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:11:12 compute-1 nova_compute[235132]: 2025-10-10 10:11:12.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:12 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:12 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:12 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:12 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53440041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:13 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:13 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:13 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:13 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:13.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:14 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:14 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:14 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:14.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:14 compute-1 ceph-mon[79167]: pgmap v758: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 10 10:11:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328003c70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:15 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53440041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:15 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:15 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:15 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:15.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:16 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:16 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:16 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:16.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:16 compute-1 ceph-mon[79167]: pgmap v759: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 16 KiB/s wr, 1 op/s
Oct 10 10:11:16 compute-1 sudo[239187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:11:16 compute-1 sudo[239187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:11:16 compute-1 sudo[239187]: pam_unix(sudo:session): session closed for user root
Oct 10 10:11:16 compute-1 sudo[239212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:11:16 compute-1 sudo[239212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:11:16 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:16 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:16 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:16 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:17 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:11:17 compute-1 nova_compute[235132]: 2025-10-10 10:11:17.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:17 compute-1 sudo[239258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:11:17 compute-1 sudo[239258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:11:17 compute-1 sudo[239258]: pam_unix(sudo:session): session closed for user root
Oct 10 10:11:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:11:17 compute-1 sudo[239212]: pam_unix(sudo:session): session closed for user root
Oct 10 10:11:17 compute-1 nova_compute[235132]: 2025-10-10 10:11:17.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:17 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:17 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5340001080 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:17 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:17 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:17 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:17.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:18 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:18 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:11:18 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:18.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:11:18 compute-1 ceph-mon[79167]: pgmap v760: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 16 KiB/s wr, 1 op/s
Oct 10 10:11:18 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:11:18 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:11:18 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:11:18 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:11:18 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:11:18 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:11:18 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:11:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:18 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328003c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:18 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:19 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3887558386' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:11:19 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #49. Immutable memtables: 0.
Oct 10 10:11:19 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:11:19.255656) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 10:11:19 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 49
Oct 10 10:11:19 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091079255706, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 880, "num_deletes": 251, "total_data_size": 1686469, "memory_usage": 1712720, "flush_reason": "Manual Compaction"}
Oct 10 10:11:19 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #50: started
Oct 10 10:11:19 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091079264770, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 50, "file_size": 1112996, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25114, "largest_seqno": 25989, "table_properties": {"data_size": 1108996, "index_size": 1716, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9395, "raw_average_key_size": 19, "raw_value_size": 1100701, "raw_average_value_size": 2307, "num_data_blocks": 77, "num_entries": 477, "num_filter_entries": 477, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760091020, "oldest_key_time": 1760091020, "file_creation_time": 1760091079, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:11:19 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 9146 microseconds, and 3977 cpu microseconds.
Oct 10 10:11:19 compute-1 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:11:19 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:11:19.264805) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #50: 1112996 bytes OK
Oct 10 10:11:19 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:11:19.264824) [db/memtable_list.cc:519] [default] Level-0 commit table #50 started
Oct 10 10:11:19 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:11:19.265897) [db/memtable_list.cc:722] [default] Level-0 commit table #50: memtable #1 done
Oct 10 10:11:19 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:11:19.265909) EVENT_LOG_v1 {"time_micros": 1760091079265905, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 10:11:19 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:11:19.265924) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 10:11:19 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1681942, prev total WAL file size 1681942, number of live WAL files 2.
Oct 10 10:11:19 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000046.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:11:19 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:11:19.266522) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Oct 10 10:11:19 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 10:11:19 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [50(1086KB)], [48(12MB)]
Oct 10 10:11:19 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091079266596, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [50], "files_L6": [48], "score": -1, "input_data_size": 14536242, "oldest_snapshot_seqno": -1}
Oct 10 10:11:19 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #51: 5369 keys, 12453499 bytes, temperature: kUnknown
Oct 10 10:11:19 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091079317678, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 51, "file_size": 12453499, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12418465, "index_size": 20524, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13445, "raw_key_size": 137899, "raw_average_key_size": 25, "raw_value_size": 12321874, "raw_average_value_size": 2295, "num_data_blocks": 834, "num_entries": 5369, "num_filter_entries": 5369, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089522, "oldest_key_time": 0, "file_creation_time": 1760091079, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 51, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:11:19 compute-1 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:11:19 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:11:19.318124) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 12453499 bytes
Oct 10 10:11:19 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:11:19.319554) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 283.7 rd, 243.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 12.8 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(24.2) write-amplify(11.2) OK, records in: 5887, records dropped: 518 output_compression: NoCompression
Oct 10 10:11:19 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:11:19.319586) EVENT_LOG_v1 {"time_micros": 1760091079319570, "job": 28, "event": "compaction_finished", "compaction_time_micros": 51244, "compaction_time_cpu_micros": 30794, "output_level": 6, "num_output_files": 1, "total_output_size": 12453499, "num_input_records": 5887, "num_output_records": 5369, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 10:11:19 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:11:19 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091079320113, "job": 28, "event": "table_file_deletion", "file_number": 50}
Oct 10 10:11:19 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000048.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:11:19 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091079325030, "job": 28, "event": "table_file_deletion", "file_number": 48}
Oct 10 10:11:19 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:11:19.266401) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:11:19 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:11:19.325082) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:11:19 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:11:19.325090) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:11:19 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:11:19.325093) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:11:19 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:11:19.325096) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:11:19 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:11:19.325098) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:11:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:19 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:19 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:19 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:19 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:19.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:19 compute-1 podman[239297]: 2025-10-10 10:11:19.98524545 +0000 UTC m=+0.086094716 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 10 10:11:20 compute-1 podman[239298]: 2025-10-10 10:11:20.013975835 +0000 UTC m=+0.115283813 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 10 10:11:20 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:20 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:20 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:20.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:20 compute-1 ceph-mon[79167]: pgmap v761: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 17 KiB/s wr, 1 op/s
Oct 10 10:11:20 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:20 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5340001080 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:20 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:20 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328003cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:21 compute-1 ceph-mon[79167]: pgmap v762: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 4.3 KiB/s wr, 0 op/s
Oct 10 10:11:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:21 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:21 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:21 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:21 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:21.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:22 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:22 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:22 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:22.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:22 compute-1 nova_compute[235132]: 2025-10-10 10:11:22.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:22 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:11:22 compute-1 sudo[239338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:11:22 compute-1 sudo[239338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:11:22 compute-1 sudo[239338]: pam_unix(sudo:session): session closed for user root
Oct 10 10:11:22 compute-1 podman[239362]: 2025-10-10 10:11:22.609535279 +0000 UTC m=+0.119728835 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=ovn_controller)
Oct 10 10:11:22 compute-1 nova_compute[235132]: 2025-10-10 10:11:22.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:22 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:22 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:22 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:11:22.751 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:dc:6a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:2f:dd:4e:d8:41'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:11:22 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:11:22.753 141156 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 10 10:11:22 compute-1 nova_compute[235132]: 2025-10-10 10:11:22.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:22 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:22 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5340002370 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:23 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:11:23 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:11:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:23 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328003cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:23 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:23 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:11:23 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:23.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:11:24 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:24 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:24 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:24.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:24 compute-1 ceph-mon[79167]: pgmap v763: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 10 10:11:24 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:24 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:24 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:24 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:25 compute-1 ceph-mon[79167]: pgmap v764: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 10 10:11:25 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:25 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5340002370 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:25 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:25 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:11:25 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:25.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:11:26 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:26 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:26 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:26.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:26 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/46960711' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:11:26 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/46960711' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:11:26 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:26 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328003e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:26 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:26 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:27 compute-1 nova_compute[235132]: 2025-10-10 10:11:27.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:27 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:11:27 compute-1 ceph-mon[79167]: pgmap v765: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 10 10:11:27 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3305200522' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:11:27 compute-1 nova_compute[235132]: 2025-10-10 10:11:27.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:27 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:27 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:27 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/101127 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 10:11:27 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:27 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:27 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:27.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:28 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:28 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:28 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:28.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:28 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3006987527' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:11:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:28 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:28 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328003e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:29 compute-1 ceph-mon[79167]: pgmap v766: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 10 10:11:29 compute-1 sshd-session[239390]: Connection reset by 198.235.24.239 port 61830 [preauth]
Oct 10 10:11:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:29 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:29 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:29 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:29 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:29.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:30 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:30 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:30 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:30.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:30 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:30 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:30 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:30 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:31 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328003e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:31 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:11:31.755 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ee0899c1-415d-4aa8-abe8-1240b4e8bf2c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:11:31 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:31 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:31 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:31.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:31 compute-1 ceph-mon[79167]: pgmap v767: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 10 10:11:31 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:11:32 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:32 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:32 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:32.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:32 compute-1 nova_compute[235132]: 2025-10-10 10:11:32.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:32 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:11:32 compute-1 nova_compute[235132]: 2025-10-10 10:11:32.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:32 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:32 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:32 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:32 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:33 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5340003410 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:33 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:33 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:11:33 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:33.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:11:33 compute-1 ceph-mon[79167]: pgmap v768: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 44 op/s
Oct 10 10:11:34 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:34 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:11:34 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:34.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:11:34 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:34 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328003e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:34 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:34 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53200037d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:35 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:35 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:35 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:35 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:35 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:35.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:35 compute-1 ceph-mon[79167]: pgmap v769: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 7.4 KiB/s rd, 15 KiB/s wr, 10 op/s
Oct 10 10:11:36 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:36 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:11:36 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:36.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:11:36 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:36 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:36 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:36 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328003e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:37 compute-1 podman[239398]: 2025-10-10 10:11:37.006462999 +0000 UTC m=+0.098440602 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 10 10:11:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:37 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:11:37 compute-1 nova_compute[235132]: 2025-10-10 10:11:37.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:11:37 compute-1 sudo[239418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:11:37 compute-1 sudo[239418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:11:37 compute-1 sudo[239418]: pam_unix(sudo:session): session closed for user root
Oct 10 10:11:37 compute-1 nova_compute[235132]: 2025-10-10 10:11:37.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:37 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53200037d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:37 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:37 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:37 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:37.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:38 compute-1 ceph-mon[79167]: pgmap v770: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 7.4 KiB/s rd, 15 KiB/s wr, 10 op/s
Oct 10 10:11:38 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:38 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:38 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:38.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:38 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:38 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:39 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:39 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:39 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:39 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:39.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:40 compute-1 ceph-mon[79167]: pgmap v771: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 76 op/s
Oct 10 10:11:40 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:40 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:11:40 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:40 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:11:40 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:40 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:40 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:40.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:40 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:40 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:40 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:40 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53200044e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:41 compute-1 nova_compute[235132]: 2025-10-10 10:11:41.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:41 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:41 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:41 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:41 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:41 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:41.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:42 compute-1 ceph-mon[79167]: pgmap v772: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 76 op/s
Oct 10 10:11:42 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:42 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:42 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:42.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:11:42.206 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:11:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:11:42.206 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:11:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:11:42.207 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:11:42 compute-1 nova_compute[235132]: 2025-10-10 10:11:42.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:42 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:11:42 compute-1 nova_compute[235132]: 2025-10-10 10:11:42.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:42 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:42 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:42 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:42 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:43 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 10:11:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:43 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:43 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:43 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:11:43 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:43.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:11:44 compute-1 ceph-mon[79167]: pgmap v773: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 77 op/s
Oct 10 10:11:44 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:44 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:44 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:44.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:44 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:44 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:45 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:45 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:45 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:45 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:45.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:46 compute-1 ceph-mon[79167]: pgmap v774: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 KiB/s wr, 67 op/s
Oct 10 10:11:46 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:46 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:46 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:46.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:46 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:46 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:46 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:46 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5340003d30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:47 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:11:47 compute-1 nova_compute[235132]: 2025-10-10 10:11:47.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:11:47 compute-1 nova_compute[235132]: 2025-10-10 10:11:47.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:47 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:47 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:47 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:47 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:11:47 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:47.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:11:48 compute-1 nova_compute[235132]: 2025-10-10 10:11:48.046 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:11:48 compute-1 nova_compute[235132]: 2025-10-10 10:11:48.047 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:11:48 compute-1 ceph-mon[79167]: pgmap v775: 353 pgs: 353 active+clean; 167 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 KiB/s wr, 67 op/s
Oct 10 10:11:48 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:48 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:48 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:48.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:48 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:48 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:49 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5340003d30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/101149 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 10:11:49 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:49 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:11:49 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:49.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:11:50 compute-1 nova_compute[235132]: 2025-10-10 10:11:50.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:11:50 compute-1 nova_compute[235132]: 2025-10-10 10:11:50.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:11:50 compute-1 nova_compute[235132]: 2025-10-10 10:11:50.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 10:11:50 compute-1 ceph-mon[79167]: pgmap v776: 353 pgs: 353 active+clean; 200 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 130 op/s
Oct 10 10:11:50 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:50 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:50 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:50.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:50 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:50 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5340003d30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:50 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:50 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:51 compute-1 podman[239451]: 2025-10-10 10:11:51.006109148 +0000 UTC m=+0.094382031 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, managed_by=edpm_ansible, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 10 10:11:51 compute-1 podman[239452]: 2025-10-10 10:11:51.009617854 +0000 UTC m=+0.096949252 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 10 10:11:51 compute-1 nova_compute[235132]: 2025-10-10 10:11:51.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:11:51 compute-1 nova_compute[235132]: 2025-10-10 10:11:51.044 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 10:11:51 compute-1 nova_compute[235132]: 2025-10-10 10:11:51.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 10:11:51 compute-1 nova_compute[235132]: 2025-10-10 10:11:51.193 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "refresh_cache-b8379f65-91e0-45a5-a245-a1bc27260f20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:11:51 compute-1 nova_compute[235132]: 2025-10-10 10:11:51.194 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquired lock "refresh_cache-b8379f65-91e0-45a5-a245-a1bc27260f20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:11:51 compute-1 nova_compute[235132]: 2025-10-10 10:11:51.195 2 DEBUG nova.network.neutron [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 10 10:11:51 compute-1 nova_compute[235132]: 2025-10-10 10:11:51.195 2 DEBUG nova.objects.instance [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lazy-loading 'info_cache' on Instance uuid b8379f65-91e0-45a5-a245-a1bc27260f20 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 10:11:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:51 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53200044e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:51 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:51 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:51 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:51.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:52 compute-1 ceph-mon[79167]: pgmap v777: 353 pgs: 353 active+clean; 200 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 10 10:11:52 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1869105366' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:11:52 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:52 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:11:52 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:52.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:11:52 compute-1 nova_compute[235132]: 2025-10-10 10:11:52.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:11:52 compute-1 nova_compute[235132]: 2025-10-10 10:11:52.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:52 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:52 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:52 compute-1 podman[239494]: 2025-10-10 10:11:52.961798917 +0000 UTC m=+0.076073510 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 10 10:11:52 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:52 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:53 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1716989008' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:11:53 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3689505701' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:11:53 compute-1 nova_compute[235132]: 2025-10-10 10:11:53.239 2 DEBUG nova.network.neutron [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Updating instance_info_cache with network_info: [{"id": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "address": "fa:16:3e:f6:78:9d", "network": {"id": "bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11", "bridge": "br-int", "label": "tempest-network-smoke--1013272438", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3281ffe2-3f", "ovs_interfaceid": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:11:53 compute-1 nova_compute[235132]: 2025-10-10 10:11:53.253 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Releasing lock "refresh_cache-b8379f65-91e0-45a5-a245-a1bc27260f20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:11:53 compute-1 nova_compute[235132]: 2025-10-10 10:11:53.254 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 10 10:11:53 compute-1 nova_compute[235132]: 2025-10-10 10:11:53.254 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:11:53 compute-1 nova_compute[235132]: 2025-10-10 10:11:53.255 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:11:53 compute-1 nova_compute[235132]: 2025-10-10 10:11:53.255 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:11:53 compute-1 nova_compute[235132]: 2025-10-10 10:11:53.255 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:11:53 compute-1 nova_compute[235132]: 2025-10-10 10:11:53.279 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:11:53 compute-1 nova_compute[235132]: 2025-10-10 10:11:53.280 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:11:53 compute-1 nova_compute[235132]: 2025-10-10 10:11:53.281 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:11:53 compute-1 nova_compute[235132]: 2025-10-10 10:11:53.281 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:11:53 compute-1 nova_compute[235132]: 2025-10-10 10:11:53.281 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:11:53 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:11:53 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/846796813' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:11:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:53 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:53 compute-1 nova_compute[235132]: 2025-10-10 10:11:53.771 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:11:53 compute-1 nova_compute[235132]: 2025-10-10 10:11:53.846 2 DEBUG nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 10 10:11:53 compute-1 nova_compute[235132]: 2025-10-10 10:11:53.847 2 DEBUG nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 10 10:11:53 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:53 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:53 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:53.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:54 compute-1 nova_compute[235132]: 2025-10-10 10:11:54.020 2 WARNING nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:11:54 compute-1 nova_compute[235132]: 2025-10-10 10:11:54.021 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4725MB free_disk=59.89714813232422GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:11:54 compute-1 nova_compute[235132]: 2025-10-10 10:11:54.022 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:11:54 compute-1 nova_compute[235132]: 2025-10-10 10:11:54.022 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:11:54 compute-1 nova_compute[235132]: 2025-10-10 10:11:54.085 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Instance b8379f65-91e0-45a5-a245-a1bc27260f20 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 10 10:11:54 compute-1 nova_compute[235132]: 2025-10-10 10:11:54.085 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:11:54 compute-1 nova_compute[235132]: 2025-10-10 10:11:54.086 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:11:54 compute-1 nova_compute[235132]: 2025-10-10 10:11:54.125 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:11:54 compute-1 ceph-mon[79167]: pgmap v778: 353 pgs: 353 active+clean; 200 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 10 10:11:54 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/3217553215' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:11:54 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/846796813' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:11:54 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/3655824811' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:11:54 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:54 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:11:54 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:54.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:11:54 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:11:54 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3776578443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:11:54 compute-1 nova_compute[235132]: 2025-10-10 10:11:54.649 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:11:54 compute-1 nova_compute[235132]: 2025-10-10 10:11:54.658 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:11:54 compute-1 nova_compute[235132]: 2025-10-10 10:11:54.680 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:11:54 compute-1 nova_compute[235132]: 2025-10-10 10:11:54.683 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:11:54 compute-1 nova_compute[235132]: 2025-10-10 10:11:54.683 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:11:54 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:54 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:54 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:55 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3776578443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:11:55 compute-1 nova_compute[235132]: 2025-10-10 10:11:55.679 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:11:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:55 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:55 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:55 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:11:55 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:55.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:11:56 compute-1 ceph-mon[79167]: pgmap v779: 353 pgs: 353 active+clean; 200 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 10 10:11:56 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:56 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:56 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:56.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:56 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:56 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:57 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:57 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3981716478' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:11:57 compute-1 nova_compute[235132]: 2025-10-10 10:11:57.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:11:57 compute-1 sudo[239568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:11:57 compute-1 sudo[239568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:11:57 compute-1 nova_compute[235132]: 2025-10-10 10:11:57.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:57 compute-1 sudo[239568]: pam_unix(sudo:session): session closed for user root
Oct 10 10:11:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:57 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5340003d30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:57 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:57 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:57 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:57.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:58 compute-1 ceph-mon[79167]: pgmap v780: 353 pgs: 353 active+clean; 200 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 10 10:11:58 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:58 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:11:58 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:11:58.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:11:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:58 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53200044e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:59 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:59 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:59 compute-1 ovn_controller[131749]: 2025-10-10T10:11:59Z|00034|binding|INFO|Releasing lport 39ed96bc-4f7e-4f78-812d-fbc3e55cd01d from this chassis (sb_readonly=0)
Oct 10 10:11:59 compute-1 nova_compute[235132]: 2025-10-10 10:11:59.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:11:59 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:11:59 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c0036e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:11:59 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:11:59 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:11:59 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:11:59.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:12:00 compute-1 nova_compute[235132]: 2025-10-10 10:12:00.154 2 DEBUG nova.compute.manager [req-4a519e4a-e9dd-4f5f-b0ea-f9569588b5cd req-bda9577c-483e-444e-878a-9bb48c503849 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Received event network-changed-3281ffe2-3fe8-4217-bcda-e7f8c55f5dae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:12:00 compute-1 nova_compute[235132]: 2025-10-10 10:12:00.155 2 DEBUG nova.compute.manager [req-4a519e4a-e9dd-4f5f-b0ea-f9569588b5cd req-bda9577c-483e-444e-878a-9bb48c503849 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Refreshing instance network info cache due to event network-changed-3281ffe2-3fe8-4217-bcda-e7f8c55f5dae. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 10 10:12:00 compute-1 nova_compute[235132]: 2025-10-10 10:12:00.155 2 DEBUG oslo_concurrency.lockutils [req-4a519e4a-e9dd-4f5f-b0ea-f9569588b5cd req-bda9577c-483e-444e-878a-9bb48c503849 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "refresh_cache-b8379f65-91e0-45a5-a245-a1bc27260f20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:12:00 compute-1 nova_compute[235132]: 2025-10-10 10:12:00.156 2 DEBUG oslo_concurrency.lockutils [req-4a519e4a-e9dd-4f5f-b0ea-f9569588b5cd req-bda9577c-483e-444e-878a-9bb48c503849 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquired lock "refresh_cache-b8379f65-91e0-45a5-a245-a1bc27260f20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:12:00 compute-1 nova_compute[235132]: 2025-10-10 10:12:00.156 2 DEBUG nova.network.neutron [req-4a519e4a-e9dd-4f5f-b0ea-f9569588b5cd req-bda9577c-483e-444e-878a-9bb48c503849 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Refreshing network info cache for port 3281ffe2-3fe8-4217-bcda-e7f8c55f5dae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 10 10:12:00 compute-1 ceph-mon[79167]: pgmap v781: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 341 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Oct 10 10:12:00 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:00 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:00 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:00.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:00 compute-1 nova_compute[235132]: 2025-10-10 10:12:00.229 2 DEBUG oslo_concurrency.lockutils [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "b8379f65-91e0-45a5-a245-a1bc27260f20" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:12:00 compute-1 nova_compute[235132]: 2025-10-10 10:12:00.230 2 DEBUG oslo_concurrency.lockutils [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "b8379f65-91e0-45a5-a245-a1bc27260f20" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:12:00 compute-1 nova_compute[235132]: 2025-10-10 10:12:00.230 2 DEBUG oslo_concurrency.lockutils [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "b8379f65-91e0-45a5-a245-a1bc27260f20-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:12:00 compute-1 nova_compute[235132]: 2025-10-10 10:12:00.230 2 DEBUG oslo_concurrency.lockutils [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "b8379f65-91e0-45a5-a245-a1bc27260f20-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:12:00 compute-1 nova_compute[235132]: 2025-10-10 10:12:00.231 2 DEBUG oslo_concurrency.lockutils [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "b8379f65-91e0-45a5-a245-a1bc27260f20-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:12:00 compute-1 nova_compute[235132]: 2025-10-10 10:12:00.233 2 INFO nova.compute.manager [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Terminating instance
Oct 10 10:12:00 compute-1 nova_compute[235132]: 2025-10-10 10:12:00.234 2 DEBUG nova.compute.manager [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 10 10:12:00 compute-1 kernel: tap3281ffe2-3f (unregistering): left promiscuous mode
Oct 10 10:12:00 compute-1 NetworkManager[44982]: <info>  [1760091120.3023] device (tap3281ffe2-3f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 10 10:12:00 compute-1 nova_compute[235132]: 2025-10-10 10:12:00.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:00 compute-1 ovn_controller[131749]: 2025-10-10T10:12:00Z|00035|binding|INFO|Releasing lport 3281ffe2-3fe8-4217-bcda-e7f8c55f5dae from this chassis (sb_readonly=0)
Oct 10 10:12:00 compute-1 ovn_controller[131749]: 2025-10-10T10:12:00Z|00036|binding|INFO|Setting lport 3281ffe2-3fe8-4217-bcda-e7f8c55f5dae down in Southbound
Oct 10 10:12:00 compute-1 ovn_controller[131749]: 2025-10-10T10:12:00Z|00037|binding|INFO|Removing iface tap3281ffe2-3f ovn-installed in OVS
Oct 10 10:12:00 compute-1 nova_compute[235132]: 2025-10-10 10:12:00.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:00 compute-1 nova_compute[235132]: 2025-10-10 10:12:00.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:00 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:12:00.328 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:78:9d 10.100.0.6'], port_security=['fa:16:3e:f6:78:9d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'b8379f65-91e0-45a5-a245-a1bc27260f20', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9213b2d5-68f1-49a1-a3cf-ea56345963fa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4cf25de6-ad2e-407a-bd52-f4f32badc3ec, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>], logical_port=3281ffe2-3fe8-4217-bcda-e7f8c55f5dae) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:12:00 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:12:00.330 141156 INFO neutron.agent.ovn.metadata.agent [-] Port 3281ffe2-3fe8-4217-bcda-e7f8c55f5dae in datapath bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11 unbound from our chassis
Oct 10 10:12:00 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:12:00.332 141156 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 10 10:12:00 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:12:00.334 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[a4c69455-09d6-4943-b9b2-80660c53cb71]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:12:00 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:12:00.337 141156 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11 namespace which is not needed anymore
Oct 10 10:12:00 compute-1 nova_compute[235132]: 2025-10-10 10:12:00.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:00 compute-1 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Oct 10 10:12:00 compute-1 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 16.900s CPU time.
Oct 10 10:12:00 compute-1 systemd-machined[191637]: Machine qemu-1-instance-00000001 terminated.
Oct 10 10:12:00 compute-1 nova_compute[235132]: 2025-10-10 10:12:00.480 2 INFO nova.virt.libvirt.driver [-] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Instance destroyed successfully.
Oct 10 10:12:00 compute-1 nova_compute[235132]: 2025-10-10 10:12:00.480 2 DEBUG nova.objects.instance [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'resources' on Instance uuid b8379f65-91e0-45a5-a245-a1bc27260f20 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 10:12:00 compute-1 nova_compute[235132]: 2025-10-10 10:12:00.496 2 DEBUG nova.virt.libvirt.vif [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-10T10:10:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1823645149',display_name='tempest-TestNetworkBasicOps-server-1823645149',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1823645149',id=1,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH1cySxBL6pw+6qEpturfgqFpVsnU32fmvYm1ovqdR9d7Yu/HsSXnbP11SE0LsPImrqW3NM7Ipp+q9ZG2BlkPbNPH4TMiwgnLU7hJmzvd5980ZxncdeOwTfn8+UHeM5LSQ==',key_name='tempest-TestNetworkBasicOps-1606841299',keypairs=<?>,launch_index=0,launched_at=2025-10-10T10:10:46Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-riep0t81',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-10T10:10:46Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=b8379f65-91e0-45a5-a245-a1bc27260f20,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "address": "fa:16:3e:f6:78:9d", "network": {"id": "bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11", "bridge": "br-int", "label": "tempest-network-smoke--1013272438", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3281ffe2-3f", "ovs_interfaceid": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 10 10:12:00 compute-1 nova_compute[235132]: 2025-10-10 10:12:00.496 2 DEBUG nova.network.os_vif_util [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "address": "fa:16:3e:f6:78:9d", "network": {"id": "bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11", "bridge": "br-int", "label": "tempest-network-smoke--1013272438", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3281ffe2-3f", "ovs_interfaceid": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 10 10:12:00 compute-1 nova_compute[235132]: 2025-10-10 10:12:00.497 2 DEBUG nova.network.os_vif_util [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f6:78:9d,bridge_name='br-int',has_traffic_filtering=True,id=3281ffe2-3fe8-4217-bcda-e7f8c55f5dae,network=Network(bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3281ffe2-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 10 10:12:00 compute-1 nova_compute[235132]: 2025-10-10 10:12:00.497 2 DEBUG os_vif [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:78:9d,bridge_name='br-int',has_traffic_filtering=True,id=3281ffe2-3fe8-4217-bcda-e7f8c55f5dae,network=Network(bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3281ffe2-3f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 10 10:12:00 compute-1 nova_compute[235132]: 2025-10-10 10:12:00.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:00 compute-1 nova_compute[235132]: 2025-10-10 10:12:00.500 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3281ffe2-3f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:12:00 compute-1 nova_compute[235132]: 2025-10-10 10:12:00.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:00 compute-1 neutron-haproxy-ovnmeta-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11[239056]: [NOTICE]   (239060) : haproxy version is 2.8.14-c23fe91
Oct 10 10:12:00 compute-1 neutron-haproxy-ovnmeta-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11[239056]: [NOTICE]   (239060) : path to executable is /usr/sbin/haproxy
Oct 10 10:12:00 compute-1 neutron-haproxy-ovnmeta-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11[239056]: [WARNING]  (239060) : Exiting Master process...
Oct 10 10:12:00 compute-1 neutron-haproxy-ovnmeta-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11[239056]: [WARNING]  (239060) : Exiting Master process...
Oct 10 10:12:00 compute-1 nova_compute[235132]: 2025-10-10 10:12:00.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 10 10:12:00 compute-1 neutron-haproxy-ovnmeta-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11[239056]: [ALERT]    (239060) : Current worker (239062) exited with code 143 (Terminated)
Oct 10 10:12:00 compute-1 neutron-haproxy-ovnmeta-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11[239056]: [WARNING]  (239060) : All workers exited. Exiting... (0)
Oct 10 10:12:00 compute-1 nova_compute[235132]: 2025-10-10 10:12:00.562 2 INFO os_vif [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:78:9d,bridge_name='br-int',has_traffic_filtering=True,id=3281ffe2-3fe8-4217-bcda-e7f8c55f5dae,network=Network(bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3281ffe2-3f')
Oct 10 10:12:00 compute-1 systemd[1]: libpod-938921b816c5789d3219b8ece791d4e46b149d24d31311148bb49416bfe16485.scope: Deactivated successfully.
Oct 10 10:12:00 compute-1 podman[239619]: 2025-10-10 10:12:00.570043032 +0000 UTC m=+0.110396680 container died 938921b816c5789d3219b8ece791d4e46b149d24d31311148bb49416bfe16485 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 10:12:00 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-938921b816c5789d3219b8ece791d4e46b149d24d31311148bb49416bfe16485-userdata-shm.mount: Deactivated successfully.
Oct 10 10:12:00 compute-1 systemd[1]: var-lib-containers-storage-overlay-9d9cdc93c5c72ed4f0d25c08523e3110ee9304b40ab9c09ff8991fb32bfd66f7-merged.mount: Deactivated successfully.
Oct 10 10:12:00 compute-1 podman[239619]: 2025-10-10 10:12:00.61386426 +0000 UTC m=+0.154217868 container cleanup 938921b816c5789d3219b8ece791d4e46b149d24d31311148bb49416bfe16485 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 10:12:00 compute-1 systemd[1]: libpod-conmon-938921b816c5789d3219b8ece791d4e46b149d24d31311148bb49416bfe16485.scope: Deactivated successfully.
Oct 10 10:12:00 compute-1 podman[239673]: 2025-10-10 10:12:00.702458082 +0000 UTC m=+0.062196831 container remove 938921b816c5789d3219b8ece791d4e46b149d24d31311148bb49416bfe16485 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 10:12:00 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:12:00.713 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[7f726503-2a00-46a1-b19f-fa623f41c72f]: (4, ('Fri Oct 10 10:12:00 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11 (938921b816c5789d3219b8ece791d4e46b149d24d31311148bb49416bfe16485)\n938921b816c5789d3219b8ece791d4e46b149d24d31311148bb49416bfe16485\nFri Oct 10 10:12:00 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11 (938921b816c5789d3219b8ece791d4e46b149d24d31311148bb49416bfe16485)\n938921b816c5789d3219b8ece791d4e46b149d24d31311148bb49416bfe16485\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:12:00 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:12:00.716 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[68dcf478-702f-4d2c-9ced-e26d788cc1cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:12:00 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:12:00.718 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbc8bfbd1-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:12:00 compute-1 nova_compute[235132]: 2025-10-10 10:12:00.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:00 compute-1 kernel: tapbc8bfbd1-b0: left promiscuous mode
Oct 10 10:12:00 compute-1 nova_compute[235132]: 2025-10-10 10:12:00.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:00 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:12:00.750 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[563d056a-97b2-416b-a0c0-5e520923c45e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:12:00 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:00 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5340003d30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:00 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:12:00.777 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[8d7eebd6-271d-4338-bfc5-d3376a8086e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:12:00 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:12:00.779 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[2c7a4495-ea16-471f-95fd-0c567c93f132]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:12:00 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:12:00.800 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[58162ba0-99a6-47e9-9556-5680e1a484eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396289, 'reachable_time': 36370, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 239694, 'error': None, 'target': 'ovnmeta-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:12:00 compute-1 systemd[1]: run-netns-ovnmeta\x2dbc8bfbd1\x2db5ac\x2d42d3\x2db24d\x2dbaf38dabaf11.mount: Deactivated successfully.
Oct 10 10:12:00 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:12:00.825 141275 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 10 10:12:00 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:12:00.825 141275 DEBUG oslo.privsep.daemon [-] privsep: reply[77192183-6022-4e3c-a832-ab8147580542]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:12:00 compute-1 nova_compute[235132]: 2025-10-10 10:12:00.958 2 DEBUG nova.compute.manager [req-ad176790-af2b-49bc-896f-d2f365d1404b req-03ea29c6-5bae-48c4-aa3d-91abeba67f7c 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Received event network-vif-unplugged-3281ffe2-3fe8-4217-bcda-e7f8c55f5dae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:12:00 compute-1 nova_compute[235132]: 2025-10-10 10:12:00.959 2 DEBUG oslo_concurrency.lockutils [req-ad176790-af2b-49bc-896f-d2f365d1404b req-03ea29c6-5bae-48c4-aa3d-91abeba67f7c 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "b8379f65-91e0-45a5-a245-a1bc27260f20-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:12:00 compute-1 nova_compute[235132]: 2025-10-10 10:12:00.960 2 DEBUG oslo_concurrency.lockutils [req-ad176790-af2b-49bc-896f-d2f365d1404b req-03ea29c6-5bae-48c4-aa3d-91abeba67f7c 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "b8379f65-91e0-45a5-a245-a1bc27260f20-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:12:00 compute-1 nova_compute[235132]: 2025-10-10 10:12:00.960 2 DEBUG oslo_concurrency.lockutils [req-ad176790-af2b-49bc-896f-d2f365d1404b req-03ea29c6-5bae-48c4-aa3d-91abeba67f7c 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "b8379f65-91e0-45a5-a245-a1bc27260f20-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:12:00 compute-1 nova_compute[235132]: 2025-10-10 10:12:00.960 2 DEBUG nova.compute.manager [req-ad176790-af2b-49bc-896f-d2f365d1404b req-03ea29c6-5bae-48c4-aa3d-91abeba67f7c 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] No waiting events found dispatching network-vif-unplugged-3281ffe2-3fe8-4217-bcda-e7f8c55f5dae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 10:12:00 compute-1 nova_compute[235132]: 2025-10-10 10:12:00.960 2 DEBUG nova.compute.manager [req-ad176790-af2b-49bc-896f-d2f365d1404b req-03ea29c6-5bae-48c4-aa3d-91abeba67f7c 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Received event network-vif-unplugged-3281ffe2-3fe8-4217-bcda-e7f8c55f5dae for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 10 10:12:00 compute-1 nova_compute[235132]: 2025-10-10 10:12:00.995 2 INFO nova.virt.libvirt.driver [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Deleting instance files /var/lib/nova/instances/b8379f65-91e0-45a5-a245-a1bc27260f20_del
Oct 10 10:12:00 compute-1 nova_compute[235132]: 2025-10-10 10:12:00.996 2 INFO nova.virt.libvirt.driver [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Deletion of /var/lib/nova/instances/b8379f65-91e0-45a5-a245-a1bc27260f20_del complete
Oct 10 10:12:01 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:01 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53200044e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:01 compute-1 nova_compute[235132]: 2025-10-10 10:12:01.061 2 DEBUG nova.virt.libvirt.host [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Oct 10 10:12:01 compute-1 nova_compute[235132]: 2025-10-10 10:12:01.062 2 INFO nova.virt.libvirt.host [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] UEFI support detected
Oct 10 10:12:01 compute-1 nova_compute[235132]: 2025-10-10 10:12:01.064 2 INFO nova.compute.manager [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Took 0.83 seconds to destroy the instance on the hypervisor.
Oct 10 10:12:01 compute-1 nova_compute[235132]: 2025-10-10 10:12:01.065 2 DEBUG oslo.service.loopingcall [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 10 10:12:01 compute-1 nova_compute[235132]: 2025-10-10 10:12:01.065 2 DEBUG nova.compute.manager [-] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 10 10:12:01 compute-1 nova_compute[235132]: 2025-10-10 10:12:01.066 2 DEBUG nova.network.neutron [-] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 10 10:12:01 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:01 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003e10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:01 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:01 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:12:01 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:01.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:12:01 compute-1 nova_compute[235132]: 2025-10-10 10:12:01.985 2 DEBUG nova.network.neutron [-] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:12:02 compute-1 nova_compute[235132]: 2025-10-10 10:12:02.006 2 INFO nova.compute.manager [-] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Took 0.94 seconds to deallocate network for instance.
Oct 10 10:12:02 compute-1 nova_compute[235132]: 2025-10-10 10:12:02.078 2 DEBUG oslo_concurrency.lockutils [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:12:02 compute-1 nova_compute[235132]: 2025-10-10 10:12:02.080 2 DEBUG oslo_concurrency.lockutils [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:12:02 compute-1 nova_compute[235132]: 2025-10-10 10:12:02.106 2 DEBUG nova.network.neutron [req-4a519e4a-e9dd-4f5f-b0ea-f9569588b5cd req-bda9577c-483e-444e-878a-9bb48c503849 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Updated VIF entry in instance network info cache for port 3281ffe2-3fe8-4217-bcda-e7f8c55f5dae. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 10 10:12:02 compute-1 nova_compute[235132]: 2025-10-10 10:12:02.107 2 DEBUG nova.network.neutron [req-4a519e4a-e9dd-4f5f-b0ea-f9569588b5cd req-bda9577c-483e-444e-878a-9bb48c503849 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Updating instance_info_cache with network_info: [{"id": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "address": "fa:16:3e:f6:78:9d", "network": {"id": "bc8bfbd1-b5ac-42d3-b24d-baf38dabaf11", "bridge": "br-int", "label": "tempest-network-smoke--1013272438", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3281ffe2-3f", "ovs_interfaceid": "3281ffe2-3fe8-4217-bcda-e7f8c55f5dae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:12:02 compute-1 nova_compute[235132]: 2025-10-10 10:12:02.134 2 DEBUG oslo_concurrency.lockutils [req-4a519e4a-e9dd-4f5f-b0ea-f9569588b5cd req-bda9577c-483e-444e-878a-9bb48c503849 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Releasing lock "refresh_cache-b8379f65-91e0-45a5-a245-a1bc27260f20" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:12:02 compute-1 nova_compute[235132]: 2025-10-10 10:12:02.146 2 DEBUG oslo_concurrency.processutils [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:12:02 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:02 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:02 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:02.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:02 compute-1 nova_compute[235132]: 2025-10-10 10:12:02.233 2 DEBUG nova.compute.manager [req-8e92e726-f99c-4ca8-9564-294575408ad2 req-295c8ee1-2b0b-4ec1-b783-bc5c5f646df9 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Received event network-vif-deleted-3281ffe2-3fe8-4217-bcda-e7f8c55f5dae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:12:02 compute-1 nova_compute[235132]: 2025-10-10 10:12:02.234 2 INFO nova.compute.manager [req-8e92e726-f99c-4ca8-9564-294575408ad2 req-295c8ee1-2b0b-4ec1-b783-bc5c5f646df9 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Neutron deleted interface 3281ffe2-3fe8-4217-bcda-e7f8c55f5dae; detaching it from the instance and deleting it from the info cache
Oct 10 10:12:02 compute-1 nova_compute[235132]: 2025-10-10 10:12:02.234 2 DEBUG nova.network.neutron [req-8e92e726-f99c-4ca8-9564-294575408ad2 req-295c8ee1-2b0b-4ec1-b783-bc5c5f646df9 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:12:02 compute-1 ceph-mon[79167]: pgmap v782: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Oct 10 10:12:02 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:12:02 compute-1 nova_compute[235132]: 2025-10-10 10:12:02.262 2 DEBUG nova.compute.manager [req-8e92e726-f99c-4ca8-9564-294575408ad2 req-295c8ee1-2b0b-4ec1-b783-bc5c5f646df9 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Detach interface failed, port_id=3281ffe2-3fe8-4217-bcda-e7f8c55f5dae, reason: Instance b8379f65-91e0-45a5-a245-a1bc27260f20 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct 10 10:12:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:12:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:12:02 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2472464452' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:12:02 compute-1 nova_compute[235132]: 2025-10-10 10:12:02.665 2 DEBUG oslo_concurrency.processutils [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:12:02 compute-1 nova_compute[235132]: 2025-10-10 10:12:02.670 2 DEBUG nova.compute.provider_tree [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:12:02 compute-1 nova_compute[235132]: 2025-10-10 10:12:02.685 2 DEBUG nova.scheduler.client.report [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:12:02 compute-1 nova_compute[235132]: 2025-10-10 10:12:02.706 2 DEBUG oslo_concurrency.lockutils [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:12:02 compute-1 nova_compute[235132]: 2025-10-10 10:12:02.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:02 compute-1 nova_compute[235132]: 2025-10-10 10:12:02.746 2 INFO nova.scheduler.client.report [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Deleted allocations for instance b8379f65-91e0-45a5-a245-a1bc27260f20
Oct 10 10:12:02 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:02 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c0036e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:02 compute-1 nova_compute[235132]: 2025-10-10 10:12:02.811 2 DEBUG oslo_concurrency.lockutils [None req-90259828-06a4-49b7-b779-0cd6847018e6 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "b8379f65-91e0-45a5-a245-a1bc27260f20" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:12:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:03 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5340003d30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:03 compute-1 nova_compute[235132]: 2025-10-10 10:12:03.047 2 DEBUG nova.compute.manager [req-7385c265-388e-4ebd-9815-d52978886b08 req-640e2cd9-2ad4-4198-a277-549d57da8fcb 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Received event network-vif-plugged-3281ffe2-3fe8-4217-bcda-e7f8c55f5dae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:12:03 compute-1 nova_compute[235132]: 2025-10-10 10:12:03.048 2 DEBUG oslo_concurrency.lockutils [req-7385c265-388e-4ebd-9815-d52978886b08 req-640e2cd9-2ad4-4198-a277-549d57da8fcb 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "b8379f65-91e0-45a5-a245-a1bc27260f20-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:12:03 compute-1 nova_compute[235132]: 2025-10-10 10:12:03.049 2 DEBUG oslo_concurrency.lockutils [req-7385c265-388e-4ebd-9815-d52978886b08 req-640e2cd9-2ad4-4198-a277-549d57da8fcb 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "b8379f65-91e0-45a5-a245-a1bc27260f20-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:12:03 compute-1 nova_compute[235132]: 2025-10-10 10:12:03.049 2 DEBUG oslo_concurrency.lockutils [req-7385c265-388e-4ebd-9815-d52978886b08 req-640e2cd9-2ad4-4198-a277-549d57da8fcb 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "b8379f65-91e0-45a5-a245-a1bc27260f20-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:12:03 compute-1 nova_compute[235132]: 2025-10-10 10:12:03.049 2 DEBUG nova.compute.manager [req-7385c265-388e-4ebd-9815-d52978886b08 req-640e2cd9-2ad4-4198-a277-549d57da8fcb 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] No waiting events found dispatching network-vif-plugged-3281ffe2-3fe8-4217-bcda-e7f8c55f5dae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 10:12:03 compute-1 nova_compute[235132]: 2025-10-10 10:12:03.050 2 WARNING nova.compute.manager [req-7385c265-388e-4ebd-9815-d52978886b08 req-640e2cd9-2ad4-4198-a277-549d57da8fcb 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Received unexpected event network-vif-plugged-3281ffe2-3fe8-4217-bcda-e7f8c55f5dae for instance with vm_state deleted and task_state None.
Oct 10 10:12:03 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/2472464452' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:12:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:03 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53200044e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:03 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:03 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:03 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:03.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:04 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:04 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:04 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:04.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:04 compute-1 ceph-mon[79167]: pgmap v783: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 16 KiB/s wr, 56 op/s
Oct 10 10:12:04 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:04 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003e30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:05 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:05 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c0036e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:05 compute-1 ceph-mon[79167]: pgmap v784: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Oct 10 10:12:05 compute-1 nova_compute[235132]: 2025-10-10 10:12:05.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:05 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:05 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5340003d30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:05 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:05 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:12:05 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:05.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:12:06 compute-1 nova_compute[235132]: 2025-10-10 10:12:06.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:06 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:06 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:06 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:06.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:06 compute-1 nova_compute[235132]: 2025-10-10 10:12:06.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:06 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:06 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53200044e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:07 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:12:07 compute-1 nova_compute[235132]: 2025-10-10 10:12:07.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:07 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c0036e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:07 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:07 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:12:07 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:07.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:12:07 compute-1 ceph-mon[79167]: pgmap v785: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Oct 10 10:12:07 compute-1 podman[239724]: 2025-10-10 10:12:07.970265 +0000 UTC m=+0.065785769 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 10 10:12:08 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:08 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:08 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:08.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:08 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5340003d30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:09 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53200044e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:09 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:09 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:09 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:12:09 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:09.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:12:09 compute-1 ceph-mon[79167]: pgmap v786: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 3.3 KiB/s wr, 56 op/s
Oct 10 10:12:10 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:10 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:12:10 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:10.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:12:10 compute-1 nova_compute[235132]: 2025-10-10 10:12:10.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:10 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:10 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c0036e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:11 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:11 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c0014d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:11 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:11 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:11 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:11.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:11 compute-1 ceph-mon[79167]: pgmap v787: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 10 10:12:12 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:12 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:12 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:12.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:12:12 compute-1 nova_compute[235132]: 2025-10-10 10:12:12.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:12 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:12 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c0036e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:13 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:13 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:13 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:13 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:13 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:13.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:13 compute-1 ceph-mon[79167]: pgmap v788: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 10 10:12:14 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:14 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:14 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:14.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c0014d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:15 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c0036e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:15 compute-1 nova_compute[235132]: 2025-10-10 10:12:15.479 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760091120.4778812, b8379f65-91e0-45a5-a245-a1bc27260f20 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 10:12:15 compute-1 nova_compute[235132]: 2025-10-10 10:12:15.479 2 INFO nova.compute.manager [-] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] VM Stopped (Lifecycle Event)
Oct 10 10:12:15 compute-1 nova_compute[235132]: 2025-10-10 10:12:15.511 2 DEBUG nova.compute.manager [None req-d8b4fdf1-fb3d-4678-9158-ff523158894b - - - - - -] [instance: b8379f65-91e0-45a5-a245-a1bc27260f20] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:12:15 compute-1 nova_compute[235132]: 2025-10-10 10:12:15.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:15 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:15 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:15 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:12:15 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:15.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:12:16 compute-1 ceph-mon[79167]: pgmap v789: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:12:16 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:16 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:12:16 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:16.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:12:16 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:16 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:17 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:12:17 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:17 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c0014d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:12:17 compute-1 nova_compute[235132]: 2025-10-10 10:12:17.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:17 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:17 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c0036e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:17 compute-1 sudo[239753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:12:17 compute-1 sudo[239753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:12:17 compute-1 sudo[239753]: pam_unix(sudo:session): session closed for user root
Oct 10 10:12:17 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:17 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:17 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:17.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:18 compute-1 ceph-mon[79167]: pgmap v790: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:12:18 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:18 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:18 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:18.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:18 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:19 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:19 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c0014d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:19 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:19 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:12:19 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:19.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:12:20 compute-1 ceph-mon[79167]: pgmap v791: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 10 10:12:20 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:20 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:20 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:20.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:20 compute-1 nova_compute[235132]: 2025-10-10 10:12:20.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:20 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:20 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c0036e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:21 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:21 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:21 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:21 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:21 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:21.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:21 compute-1 podman[239780]: 2025-10-10 10:12:21.896950252 +0000 UTC m=+0.089249001 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 10 10:12:21 compute-1 podman[239781]: 2025-10-10 10:12:21.914412999 +0000 UTC m=+0.094999128 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:12:22 compute-1 ceph-mon[79167]: pgmap v792: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:12:22 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:22 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:12:22 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:22.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:12:22 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:12:22 compute-1 sudo[239820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:12:22 compute-1 sudo[239820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:12:22 compute-1 sudo[239820]: pam_unix(sudo:session): session closed for user root
Oct 10 10:12:22 compute-1 nova_compute[235132]: 2025-10-10 10:12:22.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:22 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:22 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:22 compute-1 sudo[239845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:12:22 compute-1 sudo[239845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:12:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:23 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c0036e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:23 compute-1 sudo[239845]: pam_unix(sudo:session): session closed for user root
Oct 10 10:12:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:23 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c0014d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:23 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:23 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:12:23 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:23.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:12:24 compute-1 podman[239902]: 2025-10-10 10:12:24.018900437 +0000 UTC m=+0.113253537 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 10 10:12:24 compute-1 ceph-mon[79167]: pgmap v793: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:12:24 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:12:24 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:12:24 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:12:24 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:12:24 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:12:24 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:12:24 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:12:24 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:24 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:24 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:24.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:24 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:24 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:24 compute-1 nova_compute[235132]: 2025-10-10 10:12:24.856 2 DEBUG oslo_concurrency.lockutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "2fe2b257-7e1f-46c2-aed9-0593c533e290" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:12:24 compute-1 nova_compute[235132]: 2025-10-10 10:12:24.857 2 DEBUG oslo_concurrency.lockutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "2fe2b257-7e1f-46c2-aed9-0593c533e290" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:12:24 compute-1 nova_compute[235132]: 2025-10-10 10:12:24.878 2 DEBUG nova.compute.manager [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 10 10:12:24 compute-1 nova_compute[235132]: 2025-10-10 10:12:24.961 2 DEBUG oslo_concurrency.lockutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:12:24 compute-1 nova_compute[235132]: 2025-10-10 10:12:24.961 2 DEBUG oslo_concurrency.lockutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:12:24 compute-1 nova_compute[235132]: 2025-10-10 10:12:24.971 2 DEBUG nova.virt.hardware [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 10 10:12:24 compute-1 nova_compute[235132]: 2025-10-10 10:12:24.971 2 INFO nova.compute.claims [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Claim successful on node compute-1.ctlplane.example.com
Oct 10 10:12:25 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:25 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003ff0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:25 compute-1 nova_compute[235132]: 2025-10-10 10:12:25.080 2 DEBUG oslo_concurrency.processutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:12:25 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:12:25 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4121639621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:12:25 compute-1 nova_compute[235132]: 2025-10-10 10:12:25.573 2 DEBUG oslo_concurrency.processutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:12:25 compute-1 nova_compute[235132]: 2025-10-10 10:12:25.581 2 DEBUG nova.compute.provider_tree [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:12:25 compute-1 nova_compute[235132]: 2025-10-10 10:12:25.602 2 DEBUG nova.scheduler.client.report [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:12:25 compute-1 nova_compute[235132]: 2025-10-10 10:12:25.633 2 DEBUG oslo_concurrency.lockutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:12:25 compute-1 nova_compute[235132]: 2025-10-10 10:12:25.636 2 DEBUG nova.compute.manager [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 10 10:12:25 compute-1 nova_compute[235132]: 2025-10-10 10:12:25.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:25 compute-1 nova_compute[235132]: 2025-10-10 10:12:25.724 2 DEBUG nova.compute.manager [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 10 10:12:25 compute-1 nova_compute[235132]: 2025-10-10 10:12:25.725 2 DEBUG nova.network.neutron [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 10 10:12:25 compute-1 nova_compute[235132]: 2025-10-10 10:12:25.746 2 INFO nova.virt.libvirt.driver [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 10 10:12:25 compute-1 nova_compute[235132]: 2025-10-10 10:12:25.763 2 DEBUG nova.compute.manager [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 10 10:12:25 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:25 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c003880 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:25 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:25 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:25 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:25.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:25 compute-1 nova_compute[235132]: 2025-10-10 10:12:25.934 2 DEBUG nova.compute.manager [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 10 10:12:25 compute-1 nova_compute[235132]: 2025-10-10 10:12:25.937 2 DEBUG nova.virt.libvirt.driver [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 10 10:12:25 compute-1 nova_compute[235132]: 2025-10-10 10:12:25.937 2 INFO nova.virt.libvirt.driver [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Creating image(s)
Oct 10 10:12:25 compute-1 nova_compute[235132]: 2025-10-10 10:12:25.973 2 DEBUG nova.storage.rbd_utils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 2fe2b257-7e1f-46c2-aed9-0593c533e290_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:12:26 compute-1 nova_compute[235132]: 2025-10-10 10:12:26.004 2 DEBUG nova.storage.rbd_utils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 2fe2b257-7e1f-46c2-aed9-0593c533e290_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:12:26 compute-1 nova_compute[235132]: 2025-10-10 10:12:26.031 2 DEBUG nova.storage.rbd_utils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 2fe2b257-7e1f-46c2-aed9-0593c533e290_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:12:26 compute-1 nova_compute[235132]: 2025-10-10 10:12:26.035 2 DEBUG oslo_concurrency.processutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:12:26 compute-1 ceph-mon[79167]: pgmap v794: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:12:26 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/4121639621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:12:26 compute-1 nova_compute[235132]: 2025-10-10 10:12:26.124 2 DEBUG oslo_concurrency.processutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:12:26 compute-1 nova_compute[235132]: 2025-10-10 10:12:26.125 2 DEBUG oslo_concurrency.lockutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "eec5fe2328f977d3b1a385313e521aef425c0ac1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:12:26 compute-1 nova_compute[235132]: 2025-10-10 10:12:26.126 2 DEBUG oslo_concurrency.lockutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "eec5fe2328f977d3b1a385313e521aef425c0ac1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:12:26 compute-1 nova_compute[235132]: 2025-10-10 10:12:26.126 2 DEBUG oslo_concurrency.lockutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "eec5fe2328f977d3b1a385313e521aef425c0ac1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:12:26 compute-1 nova_compute[235132]: 2025-10-10 10:12:26.154 2 DEBUG nova.storage.rbd_utils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 2fe2b257-7e1f-46c2-aed9-0593c533e290_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:12:26 compute-1 nova_compute[235132]: 2025-10-10 10:12:26.159 2 DEBUG oslo_concurrency.processutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 2fe2b257-7e1f-46c2-aed9-0593c533e290_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:12:26 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:26 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:26 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:26.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:26 compute-1 nova_compute[235132]: 2025-10-10 10:12:26.482 2 DEBUG oslo_concurrency.processutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 2fe2b257-7e1f-46c2-aed9-0593c533e290_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.323s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:12:26 compute-1 nova_compute[235132]: 2025-10-10 10:12:26.583 2 DEBUG nova.storage.rbd_utils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] resizing rbd image 2fe2b257-7e1f-46c2-aed9-0593c533e290_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 10 10:12:26 compute-1 nova_compute[235132]: 2025-10-10 10:12:26.719 2 DEBUG nova.objects.instance [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'migration_context' on Instance uuid 2fe2b257-7e1f-46c2-aed9-0593c533e290 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 10:12:26 compute-1 nova_compute[235132]: 2025-10-10 10:12:26.740 2 DEBUG nova.virt.libvirt.driver [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 10 10:12:26 compute-1 nova_compute[235132]: 2025-10-10 10:12:26.740 2 DEBUG nova.virt.libvirt.driver [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Ensure instance console log exists: /var/lib/nova/instances/2fe2b257-7e1f-46c2-aed9-0593c533e290/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 10 10:12:26 compute-1 nova_compute[235132]: 2025-10-10 10:12:26.741 2 DEBUG oslo_concurrency.lockutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:12:26 compute-1 nova_compute[235132]: 2025-10-10 10:12:26.742 2 DEBUG oslo_concurrency.lockutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:12:26 compute-1 nova_compute[235132]: 2025-10-10 10:12:26.742 2 DEBUG oslo_concurrency.lockutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:12:26 compute-1 nova_compute[235132]: 2025-10-10 10:12:26.746 2 DEBUG nova.policy [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7956778c03764aaf8906c9b435337976', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 10 10:12:26 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:26 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c0014d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:27 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:27 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:27 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/44782465' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:12:27 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/44782465' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:12:27 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:12:27 compute-1 nova_compute[235132]: 2025-10-10 10:12:27.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:27 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:27 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314003ff0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:27 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:27 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:27 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:27.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:28 compute-1 ceph-mon[79167]: pgmap v795: 353 pgs: 353 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 10 10:12:28 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:28 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:28 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:28.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:28 compute-1 sudo[240118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:12:28 compute-1 sudo[240118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:12:28 compute-1 sudo[240118]: pam_unix(sudo:session): session closed for user root
Oct 10 10:12:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:28 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c0038a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:29 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c001670 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:29 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:12:29 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:12:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:29 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:29 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:29 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:29 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:29.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:29 compute-1 nova_compute[235132]: 2025-10-10 10:12:29.930 2 DEBUG nova.network.neutron [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Successfully created port: eb2cd434-444d-4138-bbe8-948bf47d3986 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 10 10:12:30 compute-1 ceph-mon[79167]: pgmap v796: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 10 10:12:30 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:30 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:12:30 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:30.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:12:30 compute-1 nova_compute[235132]: 2025-10-10 10:12:30.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:30 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:30 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314004190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:31 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c0038c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:31 compute-1 ceph-mon[79167]: pgmap v797: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 10 10:12:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:31 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:31 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:31 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:12:31 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:31.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:12:31 compute-1 nova_compute[235132]: 2025-10-10 10:12:31.949 2 DEBUG nova.network.neutron [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Successfully updated port: eb2cd434-444d-4138-bbe8-948bf47d3986 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 10 10:12:31 compute-1 nova_compute[235132]: 2025-10-10 10:12:31.965 2 DEBUG oslo_concurrency.lockutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "refresh_cache-2fe2b257-7e1f-46c2-aed9-0593c533e290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:12:31 compute-1 nova_compute[235132]: 2025-10-10 10:12:31.965 2 DEBUG oslo_concurrency.lockutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquired lock "refresh_cache-2fe2b257-7e1f-46c2-aed9-0593c533e290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:12:31 compute-1 nova_compute[235132]: 2025-10-10 10:12:31.965 2 DEBUG nova.network.neutron [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 10 10:12:32 compute-1 nova_compute[235132]: 2025-10-10 10:12:32.083 2 DEBUG nova.compute.manager [req-9a815d1d-fe02-48af-9799-331d70a691c3 req-dbeb87ed-c94e-41eb-993a-a646b4277c7e 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Received event network-changed-eb2cd434-444d-4138-bbe8-948bf47d3986 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:12:32 compute-1 nova_compute[235132]: 2025-10-10 10:12:32.084 2 DEBUG nova.compute.manager [req-9a815d1d-fe02-48af-9799-331d70a691c3 req-dbeb87ed-c94e-41eb-993a-a646b4277c7e 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Refreshing instance network info cache due to event network-changed-eb2cd434-444d-4138-bbe8-948bf47d3986. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 10 10:12:32 compute-1 nova_compute[235132]: 2025-10-10 10:12:32.084 2 DEBUG oslo_concurrency.lockutils [req-9a815d1d-fe02-48af-9799-331d70a691c3 req-dbeb87ed-c94e-41eb-993a-a646b4277c7e 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "refresh_cache-2fe2b257-7e1f-46c2-aed9-0593c533e290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:12:32 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:32 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:32 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:32.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:32 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:12:32 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:12:32 compute-1 nova_compute[235132]: 2025-10-10 10:12:32.708 2 DEBUG nova.network.neutron [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 10 10:12:32 compute-1 nova_compute[235132]: 2025-10-10 10:12:32.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:32 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:32 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:33 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53140041b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:33 compute-1 ceph-mon[79167]: pgmap v798: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 10 10:12:33 compute-1 nova_compute[235132]: 2025-10-10 10:12:33.430 2 DEBUG nova.network.neutron [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Updating instance_info_cache with network_info: [{"id": "eb2cd434-444d-4138-bbe8-948bf47d3986", "address": "fa:16:3e:8b:9e:3d", "network": {"id": "c1ba46b2-7e02-4d4f-b296-3e1e1f027d22", "bridge": "br-int", "label": "tempest-network-smoke--2006106686", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb2cd434-44", "ovs_interfaceid": "eb2cd434-444d-4138-bbe8-948bf47d3986", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:12:33 compute-1 nova_compute[235132]: 2025-10-10 10:12:33.455 2 DEBUG oslo_concurrency.lockutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Releasing lock "refresh_cache-2fe2b257-7e1f-46c2-aed9-0593c533e290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:12:33 compute-1 nova_compute[235132]: 2025-10-10 10:12:33.456 2 DEBUG nova.compute.manager [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Instance network_info: |[{"id": "eb2cd434-444d-4138-bbe8-948bf47d3986", "address": "fa:16:3e:8b:9e:3d", "network": {"id": "c1ba46b2-7e02-4d4f-b296-3e1e1f027d22", "bridge": "br-int", "label": "tempest-network-smoke--2006106686", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb2cd434-44", "ovs_interfaceid": "eb2cd434-444d-4138-bbe8-948bf47d3986", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 10 10:12:33 compute-1 nova_compute[235132]: 2025-10-10 10:12:33.456 2 DEBUG oslo_concurrency.lockutils [req-9a815d1d-fe02-48af-9799-331d70a691c3 req-dbeb87ed-c94e-41eb-993a-a646b4277c7e 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquired lock "refresh_cache-2fe2b257-7e1f-46c2-aed9-0593c533e290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:12:33 compute-1 nova_compute[235132]: 2025-10-10 10:12:33.456 2 DEBUG nova.network.neutron [req-9a815d1d-fe02-48af-9799-331d70a691c3 req-dbeb87ed-c94e-41eb-993a-a646b4277c7e 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Refreshing network info cache for port eb2cd434-444d-4138-bbe8-948bf47d3986 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 10 10:12:33 compute-1 nova_compute[235132]: 2025-10-10 10:12:33.461 2 DEBUG nova.virt.libvirt.driver [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Start _get_guest_xml network_info=[{"id": "eb2cd434-444d-4138-bbe8-948bf47d3986", "address": "fa:16:3e:8b:9e:3d", "network": {"id": "c1ba46b2-7e02-4d4f-b296-3e1e1f027d22", "bridge": "br-int", "label": "tempest-network-smoke--2006106686", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb2cd434-44", "ovs_interfaceid": "eb2cd434-444d-4138-bbe8-948bf47d3986", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-10T10:09:50Z,direct_url=<?>,disk_format='qcow2',id=5ae78700-970d-45b4-a57d-978a054c7519,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ec962e275689437d80680ff3ea69c852',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-10T10:09:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'boot_index': 0, 'guest_format': None, 'image_id': '5ae78700-970d-45b4-a57d-978a054c7519'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 10 10:12:33 compute-1 nova_compute[235132]: 2025-10-10 10:12:33.471 2 WARNING nova.virt.libvirt.driver [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:12:33 compute-1 nova_compute[235132]: 2025-10-10 10:12:33.480 2 DEBUG nova.virt.libvirt.host [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 10 10:12:33 compute-1 nova_compute[235132]: 2025-10-10 10:12:33.482 2 DEBUG nova.virt.libvirt.host [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 10 10:12:33 compute-1 nova_compute[235132]: 2025-10-10 10:12:33.487 2 DEBUG nova.virt.libvirt.host [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 10 10:12:33 compute-1 nova_compute[235132]: 2025-10-10 10:12:33.487 2 DEBUG nova.virt.libvirt.host [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 10 10:12:33 compute-1 nova_compute[235132]: 2025-10-10 10:12:33.488 2 DEBUG nova.virt.libvirt.driver [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 10 10:12:33 compute-1 nova_compute[235132]: 2025-10-10 10:12:33.489 2 DEBUG nova.virt.hardware [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-10T10:09:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='00373e71-6208-4238-ad85-db0452c53bc6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-10T10:09:50Z,direct_url=<?>,disk_format='qcow2',id=5ae78700-970d-45b4-a57d-978a054c7519,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ec962e275689437d80680ff3ea69c852',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-10T10:09:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 10 10:12:33 compute-1 nova_compute[235132]: 2025-10-10 10:12:33.490 2 DEBUG nova.virt.hardware [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 10 10:12:33 compute-1 nova_compute[235132]: 2025-10-10 10:12:33.490 2 DEBUG nova.virt.hardware [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 10 10:12:33 compute-1 nova_compute[235132]: 2025-10-10 10:12:33.491 2 DEBUG nova.virt.hardware [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 10 10:12:33 compute-1 nova_compute[235132]: 2025-10-10 10:12:33.491 2 DEBUG nova.virt.hardware [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 10 10:12:33 compute-1 nova_compute[235132]: 2025-10-10 10:12:33.492 2 DEBUG nova.virt.hardware [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 10 10:12:33 compute-1 nova_compute[235132]: 2025-10-10 10:12:33.492 2 DEBUG nova.virt.hardware [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 10 10:12:33 compute-1 nova_compute[235132]: 2025-10-10 10:12:33.493 2 DEBUG nova.virt.hardware [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 10 10:12:33 compute-1 nova_compute[235132]: 2025-10-10 10:12:33.493 2 DEBUG nova.virt.hardware [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 10 10:12:33 compute-1 nova_compute[235132]: 2025-10-10 10:12:33.494 2 DEBUG nova.virt.hardware [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 10 10:12:33 compute-1 nova_compute[235132]: 2025-10-10 10:12:33.494 2 DEBUG nova.virt.hardware [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 10 10:12:33 compute-1 nova_compute[235132]: 2025-10-10 10:12:33.499 2 DEBUG oslo_concurrency.processutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:12:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:33 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c0038e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:33 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:33 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:12:33 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:33.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:12:33 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 10 10:12:33 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3163061778' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:12:33 compute-1 nova_compute[235132]: 2025-10-10 10:12:33.993 2 DEBUG oslo_concurrency.processutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:12:34 compute-1 nova_compute[235132]: 2025-10-10 10:12:34.028 2 DEBUG nova.storage.rbd_utils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 2fe2b257-7e1f-46c2-aed9-0593c533e290_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:12:34 compute-1 nova_compute[235132]: 2025-10-10 10:12:34.032 2 DEBUG oslo_concurrency.processutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:12:34 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:34 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:12:34 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:34.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:12:34 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3163061778' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:12:34 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 10 10:12:34 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/543711216' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:12:34 compute-1 nova_compute[235132]: 2025-10-10 10:12:34.478 2 DEBUG oslo_concurrency.processutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:12:34 compute-1 nova_compute[235132]: 2025-10-10 10:12:34.480 2 DEBUG nova.virt.libvirt.vif [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-10T10:12:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1167416058',display_name='tempest-TestNetworkBasicOps-server-1167416058',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1167416058',id=3,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMTGlkGkqxsntvfgB83G1SiWbnvW7dUHWa7mCt3Bc6Si4+/bYujJEPULms51s1qxf1oywpdTq7/IqVAkQU+odhhf7e0wmrvEXEJnRsSzZiDWmk6FDUoWkd1hV01XsyivgQ==',key_name='tempest-TestNetworkBasicOps-919848835',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-jd07fe8o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-10T10:12:25Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=2fe2b257-7e1f-46c2-aed9-0593c533e290,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eb2cd434-444d-4138-bbe8-948bf47d3986", "address": "fa:16:3e:8b:9e:3d", "network": {"id": "c1ba46b2-7e02-4d4f-b296-3e1e1f027d22", "bridge": "br-int", "label": "tempest-network-smoke--2006106686", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb2cd434-44", "ovs_interfaceid": "eb2cd434-444d-4138-bbe8-948bf47d3986", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 10 10:12:34 compute-1 nova_compute[235132]: 2025-10-10 10:12:34.480 2 DEBUG nova.network.os_vif_util [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "eb2cd434-444d-4138-bbe8-948bf47d3986", "address": "fa:16:3e:8b:9e:3d", "network": {"id": "c1ba46b2-7e02-4d4f-b296-3e1e1f027d22", "bridge": "br-int", "label": "tempest-network-smoke--2006106686", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb2cd434-44", "ovs_interfaceid": "eb2cd434-444d-4138-bbe8-948bf47d3986", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 10 10:12:34 compute-1 nova_compute[235132]: 2025-10-10 10:12:34.481 2 DEBUG nova.network.os_vif_util [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:9e:3d,bridge_name='br-int',has_traffic_filtering=True,id=eb2cd434-444d-4138-bbe8-948bf47d3986,network=Network(c1ba46b2-7e02-4d4f-b296-3e1e1f027d22),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb2cd434-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 10 10:12:34 compute-1 nova_compute[235132]: 2025-10-10 10:12:34.483 2 DEBUG nova.objects.instance [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2fe2b257-7e1f-46c2-aed9-0593c533e290 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 10:12:34 compute-1 nova_compute[235132]: 2025-10-10 10:12:34.500 2 DEBUG nova.virt.libvirt.driver [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] End _get_guest_xml xml=<domain type="kvm">
Oct 10 10:12:34 compute-1 nova_compute[235132]:   <uuid>2fe2b257-7e1f-46c2-aed9-0593c533e290</uuid>
Oct 10 10:12:34 compute-1 nova_compute[235132]:   <name>instance-00000003</name>
Oct 10 10:12:34 compute-1 nova_compute[235132]:   <memory>131072</memory>
Oct 10 10:12:34 compute-1 nova_compute[235132]:   <vcpu>1</vcpu>
Oct 10 10:12:34 compute-1 nova_compute[235132]:   <metadata>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 10:12:34 compute-1 nova_compute[235132]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:       <nova:name>tempest-TestNetworkBasicOps-server-1167416058</nova:name>
Oct 10 10:12:34 compute-1 nova_compute[235132]:       <nova:creationTime>2025-10-10 10:12:33</nova:creationTime>
Oct 10 10:12:34 compute-1 nova_compute[235132]:       <nova:flavor name="m1.nano">
Oct 10 10:12:34 compute-1 nova_compute[235132]:         <nova:memory>128</nova:memory>
Oct 10 10:12:34 compute-1 nova_compute[235132]:         <nova:disk>1</nova:disk>
Oct 10 10:12:34 compute-1 nova_compute[235132]:         <nova:swap>0</nova:swap>
Oct 10 10:12:34 compute-1 nova_compute[235132]:         <nova:ephemeral>0</nova:ephemeral>
Oct 10 10:12:34 compute-1 nova_compute[235132]:         <nova:vcpus>1</nova:vcpus>
Oct 10 10:12:34 compute-1 nova_compute[235132]:       </nova:flavor>
Oct 10 10:12:34 compute-1 nova_compute[235132]:       <nova:owner>
Oct 10 10:12:34 compute-1 nova_compute[235132]:         <nova:user uuid="7956778c03764aaf8906c9b435337976">tempest-TestNetworkBasicOps-188749107-project-member</nova:user>
Oct 10 10:12:34 compute-1 nova_compute[235132]:         <nova:project uuid="d5e531d4b440422d946eaf6fd4e166f7">tempest-TestNetworkBasicOps-188749107</nova:project>
Oct 10 10:12:34 compute-1 nova_compute[235132]:       </nova:owner>
Oct 10 10:12:34 compute-1 nova_compute[235132]:       <nova:root type="image" uuid="5ae78700-970d-45b4-a57d-978a054c7519"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:       <nova:ports>
Oct 10 10:12:34 compute-1 nova_compute[235132]:         <nova:port uuid="eb2cd434-444d-4138-bbe8-948bf47d3986">
Oct 10 10:12:34 compute-1 nova_compute[235132]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:         </nova:port>
Oct 10 10:12:34 compute-1 nova_compute[235132]:       </nova:ports>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     </nova:instance>
Oct 10 10:12:34 compute-1 nova_compute[235132]:   </metadata>
Oct 10 10:12:34 compute-1 nova_compute[235132]:   <sysinfo type="smbios">
Oct 10 10:12:34 compute-1 nova_compute[235132]:     <system>
Oct 10 10:12:34 compute-1 nova_compute[235132]:       <entry name="manufacturer">RDO</entry>
Oct 10 10:12:34 compute-1 nova_compute[235132]:       <entry name="product">OpenStack Compute</entry>
Oct 10 10:12:34 compute-1 nova_compute[235132]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 10:12:34 compute-1 nova_compute[235132]:       <entry name="serial">2fe2b257-7e1f-46c2-aed9-0593c533e290</entry>
Oct 10 10:12:34 compute-1 nova_compute[235132]:       <entry name="uuid">2fe2b257-7e1f-46c2-aed9-0593c533e290</entry>
Oct 10 10:12:34 compute-1 nova_compute[235132]:       <entry name="family">Virtual Machine</entry>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     </system>
Oct 10 10:12:34 compute-1 nova_compute[235132]:   </sysinfo>
Oct 10 10:12:34 compute-1 nova_compute[235132]:   <os>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     <boot dev="hd"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     <smbios mode="sysinfo"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:   </os>
Oct 10 10:12:34 compute-1 nova_compute[235132]:   <features>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     <acpi/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     <apic/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     <vmcoreinfo/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:   </features>
Oct 10 10:12:34 compute-1 nova_compute[235132]:   <clock offset="utc">
Oct 10 10:12:34 compute-1 nova_compute[235132]:     <timer name="pit" tickpolicy="delay"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     <timer name="hpet" present="no"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:   </clock>
Oct 10 10:12:34 compute-1 nova_compute[235132]:   <cpu mode="host-model" match="exact">
Oct 10 10:12:34 compute-1 nova_compute[235132]:     <topology sockets="1" cores="1" threads="1"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:   </cpu>
Oct 10 10:12:34 compute-1 nova_compute[235132]:   <devices>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     <disk type="network" device="disk">
Oct 10 10:12:34 compute-1 nova_compute[235132]:       <driver type="raw" cache="none"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:       <source protocol="rbd" name="vms/2fe2b257-7e1f-46c2-aed9-0593c533e290_disk">
Oct 10 10:12:34 compute-1 nova_compute[235132]:         <host name="192.168.122.100" port="6789"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:         <host name="192.168.122.102" port="6789"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:         <host name="192.168.122.101" port="6789"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:       </source>
Oct 10 10:12:34 compute-1 nova_compute[235132]:       <auth username="openstack">
Oct 10 10:12:34 compute-1 nova_compute[235132]:         <secret type="ceph" uuid="21f084a3-af34-5230-afe4-ea5cd24a55f4"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:       </auth>
Oct 10 10:12:34 compute-1 nova_compute[235132]:       <target dev="vda" bus="virtio"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     </disk>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     <disk type="network" device="cdrom">
Oct 10 10:12:34 compute-1 nova_compute[235132]:       <driver type="raw" cache="none"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:       <source protocol="rbd" name="vms/2fe2b257-7e1f-46c2-aed9-0593c533e290_disk.config">
Oct 10 10:12:34 compute-1 nova_compute[235132]:         <host name="192.168.122.100" port="6789"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:         <host name="192.168.122.102" port="6789"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:         <host name="192.168.122.101" port="6789"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:       </source>
Oct 10 10:12:34 compute-1 nova_compute[235132]:       <auth username="openstack">
Oct 10 10:12:34 compute-1 nova_compute[235132]:         <secret type="ceph" uuid="21f084a3-af34-5230-afe4-ea5cd24a55f4"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:       </auth>
Oct 10 10:12:34 compute-1 nova_compute[235132]:       <target dev="sda" bus="sata"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     </disk>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     <interface type="ethernet">
Oct 10 10:12:34 compute-1 nova_compute[235132]:       <mac address="fa:16:3e:8b:9e:3d"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:       <model type="virtio"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:       <driver name="vhost" rx_queue_size="512"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:       <mtu size="1442"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:       <target dev="tapeb2cd434-44"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     </interface>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     <serial type="pty">
Oct 10 10:12:34 compute-1 nova_compute[235132]:       <log file="/var/lib/nova/instances/2fe2b257-7e1f-46c2-aed9-0593c533e290/console.log" append="off"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     </serial>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     <video>
Oct 10 10:12:34 compute-1 nova_compute[235132]:       <model type="virtio"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     </video>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     <input type="tablet" bus="usb"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     <rng model="virtio">
Oct 10 10:12:34 compute-1 nova_compute[235132]:       <backend model="random">/dev/urandom</backend>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     </rng>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     <controller type="usb" index="0"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     <memballoon model="virtio">
Oct 10 10:12:34 compute-1 nova_compute[235132]:       <stats period="10"/>
Oct 10 10:12:34 compute-1 nova_compute[235132]:     </memballoon>
Oct 10 10:12:34 compute-1 nova_compute[235132]:   </devices>
Oct 10 10:12:34 compute-1 nova_compute[235132]: </domain>
Oct 10 10:12:34 compute-1 nova_compute[235132]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 10 10:12:34 compute-1 nova_compute[235132]: 2025-10-10 10:12:34.502 2 DEBUG nova.compute.manager [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Preparing to wait for external event network-vif-plugged-eb2cd434-444d-4138-bbe8-948bf47d3986 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 10 10:12:34 compute-1 nova_compute[235132]: 2025-10-10 10:12:34.502 2 DEBUG oslo_concurrency.lockutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:12:34 compute-1 nova_compute[235132]: 2025-10-10 10:12:34.502 2 DEBUG oslo_concurrency.lockutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:12:34 compute-1 nova_compute[235132]: 2025-10-10 10:12:34.503 2 DEBUG oslo_concurrency.lockutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:12:34 compute-1 nova_compute[235132]: 2025-10-10 10:12:34.503 2 DEBUG nova.virt.libvirt.vif [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-10T10:12:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1167416058',display_name='tempest-TestNetworkBasicOps-server-1167416058',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1167416058',id=3,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMTGlkGkqxsntvfgB83G1SiWbnvW7dUHWa7mCt3Bc6Si4+/bYujJEPULms51s1qxf1oywpdTq7/IqVAkQU+odhhf7e0wmrvEXEJnRsSzZiDWmk6FDUoWkd1hV01XsyivgQ==',key_name='tempest-TestNetworkBasicOps-919848835',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-jd07fe8o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-10T10:12:25Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=2fe2b257-7e1f-46c2-aed9-0593c533e290,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eb2cd434-444d-4138-bbe8-948bf47d3986", "address": "fa:16:3e:8b:9e:3d", "network": {"id": "c1ba46b2-7e02-4d4f-b296-3e1e1f027d22", "bridge": "br-int", "label": "tempest-network-smoke--2006106686", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb2cd434-44", "ovs_interfaceid": "eb2cd434-444d-4138-bbe8-948bf47d3986", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 10 10:12:34 compute-1 nova_compute[235132]: 2025-10-10 10:12:34.504 2 DEBUG nova.network.os_vif_util [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "eb2cd434-444d-4138-bbe8-948bf47d3986", "address": "fa:16:3e:8b:9e:3d", "network": {"id": "c1ba46b2-7e02-4d4f-b296-3e1e1f027d22", "bridge": "br-int", "label": "tempest-network-smoke--2006106686", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb2cd434-44", "ovs_interfaceid": "eb2cd434-444d-4138-bbe8-948bf47d3986", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 10 10:12:34 compute-1 nova_compute[235132]: 2025-10-10 10:12:34.504 2 DEBUG nova.network.os_vif_util [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:9e:3d,bridge_name='br-int',has_traffic_filtering=True,id=eb2cd434-444d-4138-bbe8-948bf47d3986,network=Network(c1ba46b2-7e02-4d4f-b296-3e1e1f027d22),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb2cd434-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 10 10:12:34 compute-1 nova_compute[235132]: 2025-10-10 10:12:34.505 2 DEBUG os_vif [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:9e:3d,bridge_name='br-int',has_traffic_filtering=True,id=eb2cd434-444d-4138-bbe8-948bf47d3986,network=Network(c1ba46b2-7e02-4d4f-b296-3e1e1f027d22),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb2cd434-44') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 10 10:12:34 compute-1 nova_compute[235132]: 2025-10-10 10:12:34.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:34 compute-1 nova_compute[235132]: 2025-10-10 10:12:34.506 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:12:34 compute-1 nova_compute[235132]: 2025-10-10 10:12:34.506 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 10 10:12:34 compute-1 nova_compute[235132]: 2025-10-10 10:12:34.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:34 compute-1 nova_compute[235132]: 2025-10-10 10:12:34.509 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeb2cd434-44, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:12:34 compute-1 nova_compute[235132]: 2025-10-10 10:12:34.510 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapeb2cd434-44, col_values=(('external_ids', {'iface-id': 'eb2cd434-444d-4138-bbe8-948bf47d3986', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8b:9e:3d', 'vm-uuid': '2fe2b257-7e1f-46c2-aed9-0593c533e290'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:12:34 compute-1 nova_compute[235132]: 2025-10-10 10:12:34.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:34 compute-1 NetworkManager[44982]: <info>  [1760091154.5134] manager: (tapeb2cd434-44): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Oct 10 10:12:34 compute-1 nova_compute[235132]: 2025-10-10 10:12:34.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 10 10:12:34 compute-1 nova_compute[235132]: 2025-10-10 10:12:34.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:34 compute-1 nova_compute[235132]: 2025-10-10 10:12:34.520 2 INFO os_vif [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:9e:3d,bridge_name='br-int',has_traffic_filtering=True,id=eb2cd434-444d-4138-bbe8-948bf47d3986,network=Network(c1ba46b2-7e02-4d4f-b296-3e1e1f027d22),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb2cd434-44')
Oct 10 10:12:34 compute-1 nova_compute[235132]: 2025-10-10 10:12:34.595 2 DEBUG nova.virt.libvirt.driver [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 10 10:12:34 compute-1 nova_compute[235132]: 2025-10-10 10:12:34.596 2 DEBUG nova.virt.libvirt.driver [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 10 10:12:34 compute-1 nova_compute[235132]: 2025-10-10 10:12:34.596 2 DEBUG nova.virt.libvirt.driver [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No VIF found with MAC fa:16:3e:8b:9e:3d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 10 10:12:34 compute-1 nova_compute[235132]: 2025-10-10 10:12:34.597 2 INFO nova.virt.libvirt.driver [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Using config drive
Oct 10 10:12:34 compute-1 nova_compute[235132]: 2025-10-10 10:12:34.629 2 DEBUG nova.storage.rbd_utils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 2fe2b257-7e1f-46c2-aed9-0593c533e290_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:12:34 compute-1 nova_compute[235132]: 2025-10-10 10:12:34.656 2 DEBUG nova.network.neutron [req-9a815d1d-fe02-48af-9799-331d70a691c3 req-dbeb87ed-c94e-41eb-993a-a646b4277c7e 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Updated VIF entry in instance network info cache for port eb2cd434-444d-4138-bbe8-948bf47d3986. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 10 10:12:34 compute-1 nova_compute[235132]: 2025-10-10 10:12:34.657 2 DEBUG nova.network.neutron [req-9a815d1d-fe02-48af-9799-331d70a691c3 req-dbeb87ed-c94e-41eb-993a-a646b4277c7e 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Updating instance_info_cache with network_info: [{"id": "eb2cd434-444d-4138-bbe8-948bf47d3986", "address": "fa:16:3e:8b:9e:3d", "network": {"id": "c1ba46b2-7e02-4d4f-b296-3e1e1f027d22", "bridge": "br-int", "label": "tempest-network-smoke--2006106686", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb2cd434-44", "ovs_interfaceid": "eb2cd434-444d-4138-bbe8-948bf47d3986", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:12:34 compute-1 nova_compute[235132]: 2025-10-10 10:12:34.671 2 DEBUG oslo_concurrency.lockutils [req-9a815d1d-fe02-48af-9799-331d70a691c3 req-dbeb87ed-c94e-41eb-993a-a646b4277c7e 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Releasing lock "refresh_cache-2fe2b257-7e1f-46c2-aed9-0593c533e290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:12:34 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:34 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:34 compute-1 nova_compute[235132]: 2025-10-10 10:12:34.939 2 INFO nova.virt.libvirt.driver [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Creating config drive at /var/lib/nova/instances/2fe2b257-7e1f-46c2-aed9-0593c533e290/disk.config
Oct 10 10:12:34 compute-1 nova_compute[235132]: 2025-10-10 10:12:34.944 2 DEBUG oslo_concurrency.processutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2fe2b257-7e1f-46c2-aed9-0593c533e290/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa3xnsivg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:12:35 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/101235 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 10:12:35 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:35 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:35 compute-1 nova_compute[235132]: 2025-10-10 10:12:35.084 2 DEBUG oslo_concurrency.processutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2fe2b257-7e1f-46c2-aed9-0593c533e290/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa3xnsivg" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:12:35 compute-1 nova_compute[235132]: 2025-10-10 10:12:35.118 2 DEBUG nova.storage.rbd_utils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 2fe2b257-7e1f-46c2-aed9-0593c533e290_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:12:35 compute-1 nova_compute[235132]: 2025-10-10 10:12:35.122 2 DEBUG oslo_concurrency.processutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2fe2b257-7e1f-46c2-aed9-0593c533e290/disk.config 2fe2b257-7e1f-46c2-aed9-0593c533e290_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:12:35 compute-1 nova_compute[235132]: 2025-10-10 10:12:35.297 2 DEBUG oslo_concurrency.processutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2fe2b257-7e1f-46c2-aed9-0593c533e290/disk.config 2fe2b257-7e1f-46c2-aed9-0593c533e290_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:12:35 compute-1 nova_compute[235132]: 2025-10-10 10:12:35.298 2 INFO nova.virt.libvirt.driver [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Deleting local config drive /var/lib/nova/instances/2fe2b257-7e1f-46c2-aed9-0593c533e290/disk.config because it was imported into RBD.
Oct 10 10:12:35 compute-1 kernel: tapeb2cd434-44: entered promiscuous mode
Oct 10 10:12:35 compute-1 nova_compute[235132]: 2025-10-10 10:12:35.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:35 compute-1 NetworkManager[44982]: <info>  [1760091155.3645] manager: (tapeb2cd434-44): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Oct 10 10:12:35 compute-1 ovn_controller[131749]: 2025-10-10T10:12:35Z|00038|binding|INFO|Claiming lport eb2cd434-444d-4138-bbe8-948bf47d3986 for this chassis.
Oct 10 10:12:35 compute-1 ovn_controller[131749]: 2025-10-10T10:12:35Z|00039|binding|INFO|eb2cd434-444d-4138-bbe8-948bf47d3986: Claiming fa:16:3e:8b:9e:3d 10.100.0.6
Oct 10 10:12:35 compute-1 nova_compute[235132]: 2025-10-10 10:12:35.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:35 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/543711216' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:12:35 compute-1 ceph-mon[79167]: pgmap v799: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 10 10:12:35 compute-1 nova_compute[235132]: 2025-10-10 10:12:35.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.393 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:9e:3d 10.100.0.6'], port_security=['fa:16:3e:8b:9e:3d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2fe2b257-7e1f-46c2-aed9-0593c533e290', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b2e1b849-99bd-43fd-883d-af1bb6750e12', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=86b59927-b11d-4637-a561-9adc673cffb1, chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>], logical_port=eb2cd434-444d-4138-bbe8-948bf47d3986) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.394 141156 INFO neutron.agent.ovn.metadata.agent [-] Port eb2cd434-444d-4138-bbe8-948bf47d3986 in datapath c1ba46b2-7e02-4d4f-b296-3e1e1f027d22 bound to our chassis
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.395 141156 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c1ba46b2-7e02-4d4f-b296-3e1e1f027d22
Oct 10 10:12:35 compute-1 systemd-machined[191637]: New machine qemu-2-instance-00000003.
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.410 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[166c95a6-e362-4498-9832-d13a20485f48]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.411 141156 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc1ba46b2-71 in ovnmeta-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.413 238898 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc1ba46b2-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.413 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[834e2cc3-6453-43f3-8938-ddba2e9dcc1b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.414 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[986f01b5-db40-4b0d-869f-d021f6cf4417]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.428 141275 DEBUG oslo.privsep.daemon [-] privsep: reply[c816487e-cf13-4c4c-adc0-5c7bb8d2714a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:12:35 compute-1 systemd[1]: Started Virtual Machine qemu-2-instance-00000003.
Oct 10 10:12:35 compute-1 nova_compute[235132]: 2025-10-10 10:12:35.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:35 compute-1 ovn_controller[131749]: 2025-10-10T10:12:35Z|00040|binding|INFO|Setting lport eb2cd434-444d-4138-bbe8-948bf47d3986 ovn-installed in OVS
Oct 10 10:12:35 compute-1 ovn_controller[131749]: 2025-10-10T10:12:35Z|00041|binding|INFO|Setting lport eb2cd434-444d-4138-bbe8-948bf47d3986 up in Southbound
Oct 10 10:12:35 compute-1 systemd-udevd[240285]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 10:12:35 compute-1 nova_compute[235132]: 2025-10-10 10:12:35.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.455 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[3ce694e2-04b5-4d05-99b9-14cd99674019]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:12:35 compute-1 NetworkManager[44982]: <info>  [1760091155.4632] device (tapeb2cd434-44): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 10:12:35 compute-1 NetworkManager[44982]: <info>  [1760091155.4643] device (tapeb2cd434-44): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.491 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[9f3ca832-f657-4042-8a41-0a316cadd8e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.496 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[b684cba0-49e1-4e5e-98ab-7eed8fbdbe09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:12:35 compute-1 NetworkManager[44982]: <info>  [1760091155.4991] manager: (tapc1ba46b2-70): new Veth device (/org/freedesktop/NetworkManager/Devices/35)
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.538 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[aaa65ed9-3842-40f3-9b96-1ea2481a4c6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.540 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[ef06f6e4-caad-4a4b-a143-1810d84b4a6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:12:35 compute-1 NetworkManager[44982]: <info>  [1760091155.5672] device (tapc1ba46b2-70): carrier: link connected
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.572 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[7298c66f-988f-491e-890d-1e1b846383df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.586 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[4668eaae-3ab4-453f-afc9-2f1e319043aa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc1ba46b2-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:28:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 406927, 'reachable_time': 16782, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240315, 'error': None, 'target': 'ovnmeta-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.598 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[3bb501f7-d268-4b2b-933c-f984b4592923]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1f:28ad'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 406927, 'tstamp': 406927}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240316, 'error': None, 'target': 'ovnmeta-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.618 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[ca7dfc84-7364-4362-ba97-910cd8f1cd43]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc1ba46b2-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1f:28:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 406927, 'reachable_time': 16782, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 240317, 'error': None, 'target': 'ovnmeta-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.652 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[16a97e01-de09-4cd0-91e9-fa22e3d3fe51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.724 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[982c9f8f-33dd-480c-b1f8-6f22295513ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.726 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc1ba46b2-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.726 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.727 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc1ba46b2-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:12:35 compute-1 NetworkManager[44982]: <info>  [1760091155.7296] manager: (tapc1ba46b2-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Oct 10 10:12:35 compute-1 kernel: tapc1ba46b2-70: entered promiscuous mode
Oct 10 10:12:35 compute-1 nova_compute[235132]: 2025-10-10 10:12:35.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.733 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc1ba46b2-70, col_values=(('external_ids', {'iface-id': 'ca6a8c9e-7d4d-4ccb-aa3e-a02bb6dd0c01'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:12:35 compute-1 ovn_controller[131749]: 2025-10-10T10:12:35Z|00042|binding|INFO|Releasing lport ca6a8c9e-7d4d-4ccb-aa3e-a02bb6dd0c01 from this chassis (sb_readonly=0)
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.736 141156 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c1ba46b2-7e02-4d4f-b296-3e1e1f027d22.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c1ba46b2-7e02-4d4f-b296-3e1e1f027d22.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.736 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[085c5b08-0f4a-427d-8f07-07c64e30b819]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.737 141156 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]: global
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]:     log         /dev/log local0 debug
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]:     log-tag     haproxy-metadata-proxy-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]:     user        root
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]:     group       root
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]:     maxconn     1024
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]:     pidfile     /var/lib/neutron/external/pids/c1ba46b2-7e02-4d4f-b296-3e1e1f027d22.pid.haproxy
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]:     daemon
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]: 
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]: defaults
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]:     log global
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]:     mode http
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]:     option httplog
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]:     option dontlognull
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]:     option http-server-close
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]:     option forwardfor
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]:     retries                 3
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]:     timeout http-request    30s
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]:     timeout connect         30s
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]:     timeout client          32s
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]:     timeout server          32s
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]:     timeout http-keep-alive 30s
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]: 
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]: 
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]: listen listener
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]:     bind 169.254.169.254:80
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]:     server metadata /var/lib/neutron/metadata_proxy
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]:     http-request add-header X-OVN-Network-ID c1ba46b2-7e02-4d4f-b296-3e1e1f027d22
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 10 10:12:35 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:12:35.738 141156 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22', 'env', 'PROCESS_TAG=haproxy-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c1ba46b2-7e02-4d4f-b296-3e1e1f027d22.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 10 10:12:35 compute-1 nova_compute[235132]: 2025-10-10 10:12:35.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:35 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:35 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53140041d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:35 compute-1 nova_compute[235132]: 2025-10-10 10:12:35.900 2 DEBUG nova.compute.manager [req-3ff8559a-3c91-4ee8-ad69-49859e984c1d req-228d662c-09a5-4bff-bfb6-d57616aeb6cd 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Received event network-vif-plugged-eb2cd434-444d-4138-bbe8-948bf47d3986 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:12:35 compute-1 nova_compute[235132]: 2025-10-10 10:12:35.901 2 DEBUG oslo_concurrency.lockutils [req-3ff8559a-3c91-4ee8-ad69-49859e984c1d req-228d662c-09a5-4bff-bfb6-d57616aeb6cd 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:12:35 compute-1 nova_compute[235132]: 2025-10-10 10:12:35.902 2 DEBUG oslo_concurrency.lockutils [req-3ff8559a-3c91-4ee8-ad69-49859e984c1d req-228d662c-09a5-4bff-bfb6-d57616aeb6cd 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:12:35 compute-1 nova_compute[235132]: 2025-10-10 10:12:35.902 2 DEBUG oslo_concurrency.lockutils [req-3ff8559a-3c91-4ee8-ad69-49859e984c1d req-228d662c-09a5-4bff-bfb6-d57616aeb6cd 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:12:35 compute-1 nova_compute[235132]: 2025-10-10 10:12:35.903 2 DEBUG nova.compute.manager [req-3ff8559a-3c91-4ee8-ad69-49859e984c1d req-228d662c-09a5-4bff-bfb6-d57616aeb6cd 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Processing event network-vif-plugged-eb2cd434-444d-4138-bbe8-948bf47d3986 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 10 10:12:35 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:35 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:35 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:35.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:36 compute-1 nova_compute[235132]: 2025-10-10 10:12:36.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:36 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:12:36.014 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:dc:6a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:2f:dd:4e:d8:41'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:12:36 compute-1 podman[240392]: 2025-10-10 10:12:36.183988548 +0000 UTC m=+0.059825547 container create a01570374548315ef7bf6635fa67f6eb7a97cd3857a5c57a88d956949567c026 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 10 10:12:36 compute-1 systemd[1]: Started libpod-conmon-a01570374548315ef7bf6635fa67f6eb7a97cd3857a5c57a88d956949567c026.scope.
Oct 10 10:12:36 compute-1 podman[240392]: 2025-10-10 10:12:36.15664218 +0000 UTC m=+0.032479219 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 10 10:12:36 compute-1 systemd[1]: Started libcrun container.
Oct 10 10:12:36 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f454a5f6c0eb08c56ed00e9648965604ea84ac6e2edf2652dc6afe6afb2c063/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 10 10:12:36 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:36 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:12:36 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:36.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:12:36 compute-1 podman[240392]: 2025-10-10 10:12:36.299827825 +0000 UTC m=+0.175664874 container init a01570374548315ef7bf6635fa67f6eb7a97cd3857a5c57a88d956949567c026 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct 10 10:12:36 compute-1 podman[240392]: 2025-10-10 10:12:36.309508329 +0000 UTC m=+0.185345358 container start a01570374548315ef7bf6635fa67f6eb7a97cd3857a5c57a88d956949567c026 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 10 10:12:36 compute-1 neutron-haproxy-ovnmeta-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22[240407]: [NOTICE]   (240411) : New worker (240413) forked
Oct 10 10:12:36 compute-1 neutron-haproxy-ovnmeta-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22[240407]: [NOTICE]   (240411) : Loading success.
Oct 10 10:12:36 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:12:36.358 141156 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 10 10:12:36 compute-1 nova_compute[235132]: 2025-10-10 10:12:36.555 2 DEBUG nova.virt.driver [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Emitting event <LifecycleEvent: 1760091156.554304, 2fe2b257-7e1f-46c2-aed9-0593c533e290 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 10:12:36 compute-1 nova_compute[235132]: 2025-10-10 10:12:36.555 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] VM Started (Lifecycle Event)
Oct 10 10:12:36 compute-1 nova_compute[235132]: 2025-10-10 10:12:36.559 2 DEBUG nova.compute.manager [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 10 10:12:36 compute-1 nova_compute[235132]: 2025-10-10 10:12:36.565 2 DEBUG nova.virt.libvirt.driver [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 10 10:12:36 compute-1 nova_compute[235132]: 2025-10-10 10:12:36.570 2 INFO nova.virt.libvirt.driver [-] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Instance spawned successfully.
Oct 10 10:12:36 compute-1 nova_compute[235132]: 2025-10-10 10:12:36.570 2 DEBUG nova.virt.libvirt.driver [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 10 10:12:36 compute-1 nova_compute[235132]: 2025-10-10 10:12:36.580 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:12:36 compute-1 nova_compute[235132]: 2025-10-10 10:12:36.586 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 10 10:12:36 compute-1 nova_compute[235132]: 2025-10-10 10:12:36.598 2 DEBUG nova.virt.libvirt.driver [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:12:36 compute-1 nova_compute[235132]: 2025-10-10 10:12:36.599 2 DEBUG nova.virt.libvirt.driver [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:12:36 compute-1 nova_compute[235132]: 2025-10-10 10:12:36.600 2 DEBUG nova.virt.libvirt.driver [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:12:36 compute-1 nova_compute[235132]: 2025-10-10 10:12:36.601 2 DEBUG nova.virt.libvirt.driver [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:12:36 compute-1 nova_compute[235132]: 2025-10-10 10:12:36.601 2 DEBUG nova.virt.libvirt.driver [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:12:36 compute-1 nova_compute[235132]: 2025-10-10 10:12:36.602 2 DEBUG nova.virt.libvirt.driver [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:12:36 compute-1 nova_compute[235132]: 2025-10-10 10:12:36.614 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 10 10:12:36 compute-1 nova_compute[235132]: 2025-10-10 10:12:36.614 2 DEBUG nova.virt.driver [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Emitting event <LifecycleEvent: 1760091156.554571, 2fe2b257-7e1f-46c2-aed9-0593c533e290 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 10:12:36 compute-1 nova_compute[235132]: 2025-10-10 10:12:36.615 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] VM Paused (Lifecycle Event)
Oct 10 10:12:36 compute-1 nova_compute[235132]: 2025-10-10 10:12:36.647 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:12:36 compute-1 nova_compute[235132]: 2025-10-10 10:12:36.651 2 DEBUG nova.virt.driver [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Emitting event <LifecycleEvent: 1760091156.5638149, 2fe2b257-7e1f-46c2-aed9-0593c533e290 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 10:12:36 compute-1 nova_compute[235132]: 2025-10-10 10:12:36.652 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] VM Resumed (Lifecycle Event)
Oct 10 10:12:36 compute-1 nova_compute[235132]: 2025-10-10 10:12:36.673 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:12:36 compute-1 nova_compute[235132]: 2025-10-10 10:12:36.677 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 10 10:12:36 compute-1 nova_compute[235132]: 2025-10-10 10:12:36.684 2 INFO nova.compute.manager [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Took 10.75 seconds to spawn the instance on the hypervisor.
Oct 10 10:12:36 compute-1 nova_compute[235132]: 2025-10-10 10:12:36.685 2 DEBUG nova.compute.manager [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:12:36 compute-1 nova_compute[235132]: 2025-10-10 10:12:36.695 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 10 10:12:36 compute-1 nova_compute[235132]: 2025-10-10 10:12:36.745 2 INFO nova.compute.manager [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Took 11.82 seconds to build instance.
Oct 10 10:12:36 compute-1 nova_compute[235132]: 2025-10-10 10:12:36.760 2 DEBUG oslo_concurrency.lockutils [None req-ea8117d9-bb4b-4af6-af16-27730a87e441 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "2fe2b257-7e1f-46c2-aed9-0593c533e290" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.903s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:12:36 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:36 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c003900 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:37 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:12:37 compute-1 nova_compute[235132]: 2025-10-10 10:12:37.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:37 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:37 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:37 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:37 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:37.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:37 compute-1 sudo[240423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:12:37 compute-1 sudo[240423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:12:37 compute-1 sudo[240423]: pam_unix(sudo:session): session closed for user root
Oct 10 10:12:37 compute-1 ceph-mon[79167]: pgmap v800: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 10 10:12:37 compute-1 nova_compute[235132]: 2025-10-10 10:12:37.978 2 DEBUG nova.compute.manager [req-ce91cebe-5c7e-40f9-b450-a31d1ba9ea9e req-f9f8a40e-3ad8-4f01-93a2-df9e312bf1b5 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Received event network-vif-plugged-eb2cd434-444d-4138-bbe8-948bf47d3986 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:12:37 compute-1 nova_compute[235132]: 2025-10-10 10:12:37.979 2 DEBUG oslo_concurrency.lockutils [req-ce91cebe-5c7e-40f9-b450-a31d1ba9ea9e req-f9f8a40e-3ad8-4f01-93a2-df9e312bf1b5 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:12:37 compute-1 nova_compute[235132]: 2025-10-10 10:12:37.980 2 DEBUG oslo_concurrency.lockutils [req-ce91cebe-5c7e-40f9-b450-a31d1ba9ea9e req-f9f8a40e-3ad8-4f01-93a2-df9e312bf1b5 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:12:37 compute-1 nova_compute[235132]: 2025-10-10 10:12:37.980 2 DEBUG oslo_concurrency.lockutils [req-ce91cebe-5c7e-40f9-b450-a31d1ba9ea9e req-f9f8a40e-3ad8-4f01-93a2-df9e312bf1b5 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:12:37 compute-1 nova_compute[235132]: 2025-10-10 10:12:37.980 2 DEBUG nova.compute.manager [req-ce91cebe-5c7e-40f9-b450-a31d1ba9ea9e req-f9f8a40e-3ad8-4f01-93a2-df9e312bf1b5 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] No waiting events found dispatching network-vif-plugged-eb2cd434-444d-4138-bbe8-948bf47d3986 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 10:12:37 compute-1 nova_compute[235132]: 2025-10-10 10:12:37.981 2 WARNING nova.compute.manager [req-ce91cebe-5c7e-40f9-b450-a31d1ba9ea9e req-f9f8a40e-3ad8-4f01-93a2-df9e312bf1b5 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Received unexpected event network-vif-plugged-eb2cd434-444d-4138-bbe8-948bf47d3986 for instance with vm_state active and task_state None.
Oct 10 10:12:38 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:38 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:12:38 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:38.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:12:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:38 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53140041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:38 compute-1 podman[240448]: 2025-10-10 10:12:38.960416977 +0000 UTC m=+0.063977061 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 10 10:12:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:39 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c003920 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:39 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:12:39.361 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ee0899c1-415d-4aa8-abe8-1240b4e8bf2c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:12:39 compute-1 ovn_controller[131749]: 2025-10-10T10:12:39Z|00043|binding|INFO|Releasing lport ca6a8c9e-7d4d-4ccb-aa3e-a02bb6dd0c01 from this chassis (sb_readonly=0)
Oct 10 10:12:39 compute-1 nova_compute[235132]: 2025-10-10 10:12:39.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:39 compute-1 NetworkManager[44982]: <info>  [1760091159.5327] manager: (patch-br-int-to-provnet-1d90fa58-74cb-4ad4-84e0-739689a69111): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Oct 10 10:12:39 compute-1 NetworkManager[44982]: <info>  [1760091159.5338] manager: (patch-provnet-1d90fa58-74cb-4ad4-84e0-739689a69111-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Oct 10 10:12:39 compute-1 nova_compute[235132]: 2025-10-10 10:12:39.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:39 compute-1 ovn_controller[131749]: 2025-10-10T10:12:39Z|00044|binding|INFO|Releasing lport ca6a8c9e-7d4d-4ccb-aa3e-a02bb6dd0c01 from this chassis (sb_readonly=0)
Oct 10 10:12:39 compute-1 nova_compute[235132]: 2025-10-10 10:12:39.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:39 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:39 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:39 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:39 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:39.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:39 compute-1 ceph-mon[79167]: pgmap v801: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 710 KiB/s rd, 1.8 MiB/s wr, 60 op/s
Oct 10 10:12:40 compute-1 nova_compute[235132]: 2025-10-10 10:12:40.108 2 DEBUG nova.compute.manager [req-31bcc133-a84d-427e-8b80-db2215b1462f req-5f563421-89a9-4ca9-b2f6-a165de1fcb72 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Received event network-changed-eb2cd434-444d-4138-bbe8-948bf47d3986 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:12:40 compute-1 nova_compute[235132]: 2025-10-10 10:12:40.109 2 DEBUG nova.compute.manager [req-31bcc133-a84d-427e-8b80-db2215b1462f req-5f563421-89a9-4ca9-b2f6-a165de1fcb72 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Refreshing instance network info cache due to event network-changed-eb2cd434-444d-4138-bbe8-948bf47d3986. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 10 10:12:40 compute-1 nova_compute[235132]: 2025-10-10 10:12:40.109 2 DEBUG oslo_concurrency.lockutils [req-31bcc133-a84d-427e-8b80-db2215b1462f req-5f563421-89a9-4ca9-b2f6-a165de1fcb72 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "refresh_cache-2fe2b257-7e1f-46c2-aed9-0593c533e290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:12:40 compute-1 nova_compute[235132]: 2025-10-10 10:12:40.110 2 DEBUG oslo_concurrency.lockutils [req-31bcc133-a84d-427e-8b80-db2215b1462f req-5f563421-89a9-4ca9-b2f6-a165de1fcb72 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquired lock "refresh_cache-2fe2b257-7e1f-46c2-aed9-0593c533e290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:12:40 compute-1 nova_compute[235132]: 2025-10-10 10:12:40.110 2 DEBUG nova.network.neutron [req-31bcc133-a84d-427e-8b80-db2215b1462f req-5f563421-89a9-4ca9-b2f6-a165de1fcb72 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Refreshing network info cache for port eb2cd434-444d-4138-bbe8-948bf47d3986 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 10 10:12:40 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:40 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:12:40 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:40.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:12:40 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:40 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:41 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:41 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314004210 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:41 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:41 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c003940 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:41 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:41 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:12:41 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:41.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:12:42 compute-1 ceph-mon[79167]: pgmap v802: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 693 KiB/s rd, 12 KiB/s wr, 33 op/s
Oct 10 10:12:42 compute-1 nova_compute[235132]: 2025-10-10 10:12:42.145 2 DEBUG nova.network.neutron [req-31bcc133-a84d-427e-8b80-db2215b1462f req-5f563421-89a9-4ca9-b2f6-a165de1fcb72 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Updated VIF entry in instance network info cache for port eb2cd434-444d-4138-bbe8-948bf47d3986. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 10 10:12:42 compute-1 nova_compute[235132]: 2025-10-10 10:12:42.146 2 DEBUG nova.network.neutron [req-31bcc133-a84d-427e-8b80-db2215b1462f req-5f563421-89a9-4ca9-b2f6-a165de1fcb72 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Updating instance_info_cache with network_info: [{"id": "eb2cd434-444d-4138-bbe8-948bf47d3986", "address": "fa:16:3e:8b:9e:3d", "network": {"id": "c1ba46b2-7e02-4d4f-b296-3e1e1f027d22", "bridge": "br-int", "label": "tempest-network-smoke--2006106686", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb2cd434-44", "ovs_interfaceid": "eb2cd434-444d-4138-bbe8-948bf47d3986", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:12:42 compute-1 nova_compute[235132]: 2025-10-10 10:12:42.165 2 DEBUG oslo_concurrency.lockutils [req-31bcc133-a84d-427e-8b80-db2215b1462f req-5f563421-89a9-4ca9-b2f6-a165de1fcb72 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Releasing lock "refresh_cache-2fe2b257-7e1f-46c2-aed9-0593c533e290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:12:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:12:42.206 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:12:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:12:42.207 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:12:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:12:42.208 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:12:42 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:42 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:12:42 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:42.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:12:42 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:12:42 compute-1 nova_compute[235132]: 2025-10-10 10:12:42.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:42 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:42 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:43 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:43 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:43 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:43 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:12:43 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:43.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:12:44 compute-1 ceph-mon[79167]: pgmap v803: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Oct 10 10:12:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:44 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:12:44 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:44 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:44 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:44.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:44 compute-1 nova_compute[235132]: 2025-10-10 10:12:44.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:44 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c003940 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:45 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:45 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:45 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:45 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:12:45 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:45.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:12:46 compute-1 ceph-mon[79167]: pgmap v804: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Oct 10 10:12:46 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:46 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:12:46 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:46.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:12:46 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:46 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:47 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:12:47 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:47 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c003940 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:47 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:47 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:12:47 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:47 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:12:47 compute-1 unix_chkpwd[240475]: password check failed for user (root)
Oct 10 10:12:47 compute-1 sshd-session[240473]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.159  user=root
Oct 10 10:12:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:12:47 compute-1 nova_compute[235132]: 2025-10-10 10:12:47.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:47 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:47 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:47 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:47 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:47 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:47.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:48 compute-1 ceph-mon[79167]: pgmap v805: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Oct 10 10:12:48 compute-1 nova_compute[235132]: 2025-10-10 10:12:48.063 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:12:48 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:48 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:48 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:48.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:48 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:49 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:49 compute-1 ovn_controller[131749]: 2025-10-10T10:12:49Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8b:9e:3d 10.100.0.6
Oct 10 10:12:49 compute-1 ovn_controller[131749]: 2025-10-10T10:12:49Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8b:9e:3d 10.100.0.6
Oct 10 10:12:49 compute-1 nova_compute[235132]: 2025-10-10 10:12:49.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:49 compute-1 sshd-session[240473]: Failed password for root from 193.46.255.159 port 18412 ssh2
Oct 10 10:12:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:49 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c003940 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:49 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:49 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:49 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:49.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:50 compute-1 nova_compute[235132]: 2025-10-10 10:12:50.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:12:50 compute-1 ceph-mon[79167]: pgmap v806: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 77 op/s
Oct 10 10:12:50 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:50 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 10:12:50 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:50 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:12:50 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:50.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:12:50 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:50 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:51 compute-1 nova_compute[235132]: 2025-10-10 10:12:51.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:12:51 compute-1 nova_compute[235132]: 2025-10-10 10:12:51.044 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 10:12:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:51 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:51 compute-1 unix_chkpwd[240479]: password check failed for user (root)
Oct 10 10:12:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:51 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:51 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:51 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:51 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:51.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:52 compute-1 nova_compute[235132]: 2025-10-10 10:12:52.045 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:12:52 compute-1 nova_compute[235132]: 2025-10-10 10:12:52.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 10:12:52 compute-1 nova_compute[235132]: 2025-10-10 10:12:52.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 10:12:52 compute-1 ceph-mon[79167]: pgmap v807: 353 pgs: 353 active+clean; 88 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 937 B/s wr, 44 op/s
Oct 10 10:12:52 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:52 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:52 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:52.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:12:52 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:52 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c003940 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:52 compute-1 nova_compute[235132]: 2025-10-10 10:12:52.855 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "refresh_cache-2fe2b257-7e1f-46c2-aed9-0593c533e290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:12:52 compute-1 nova_compute[235132]: 2025-10-10 10:12:52.856 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquired lock "refresh_cache-2fe2b257-7e1f-46c2-aed9-0593c533e290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:12:52 compute-1 nova_compute[235132]: 2025-10-10 10:12:52.856 2 DEBUG nova.network.neutron [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 10 10:12:52 compute-1 nova_compute[235132]: 2025-10-10 10:12:52.856 2 DEBUG nova.objects.instance [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2fe2b257-7e1f-46c2-aed9-0593c533e290 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 10:12:52 compute-1 nova_compute[235132]: 2025-10-10 10:12:52.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:52 compute-1 podman[240481]: 2025-10-10 10:12:52.970633193 +0000 UTC m=+0.067079385 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct 10 10:12:52 compute-1 podman[240480]: 2025-10-10 10:12:52.993539769 +0000 UTC m=+0.088909722 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:12:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:53 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:53 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1175062627' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:12:53 compute-1 sshd-session[240473]: Failed password for root from 193.46.255.159 port 18412 ssh2
Oct 10 10:12:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:53 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:53 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:53 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:53 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:53.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:53 compute-1 nova_compute[235132]: 2025-10-10 10:12:53.939 2 DEBUG nova.network.neutron [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Updating instance_info_cache with network_info: [{"id": "eb2cd434-444d-4138-bbe8-948bf47d3986", "address": "fa:16:3e:8b:9e:3d", "network": {"id": "c1ba46b2-7e02-4d4f-b296-3e1e1f027d22", "bridge": "br-int", "label": "tempest-network-smoke--2006106686", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb2cd434-44", "ovs_interfaceid": "eb2cd434-444d-4138-bbe8-948bf47d3986", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:12:53 compute-1 nova_compute[235132]: 2025-10-10 10:12:53.967 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Releasing lock "refresh_cache-2fe2b257-7e1f-46c2-aed9-0593c533e290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:12:53 compute-1 nova_compute[235132]: 2025-10-10 10:12:53.967 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 10 10:12:53 compute-1 nova_compute[235132]: 2025-10-10 10:12:53.968 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:12:53 compute-1 nova_compute[235132]: 2025-10-10 10:12:53.968 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:12:53 compute-1 nova_compute[235132]: 2025-10-10 10:12:53.968 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:12:53 compute-1 nova_compute[235132]: 2025-10-10 10:12:53.968 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:12:53 compute-1 nova_compute[235132]: 2025-10-10 10:12:53.989 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:12:53 compute-1 nova_compute[235132]: 2025-10-10 10:12:53.989 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:12:53 compute-1 nova_compute[235132]: 2025-10-10 10:12:53.990 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:12:53 compute-1 nova_compute[235132]: 2025-10-10 10:12:53.990 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:12:53 compute-1 nova_compute[235132]: 2025-10-10 10:12:53.990 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:12:54 compute-1 unix_chkpwd[240525]: password check failed for user (root)
Oct 10 10:12:54 compute-1 ceph-mon[79167]: pgmap v808: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 107 op/s
Oct 10 10:12:54 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/4031376606' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:12:54 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/881918671' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:12:54 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1943454956' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:12:54 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:54 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:54 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:54.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:54 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:12:54 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1031806664' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:12:54 compute-1 nova_compute[235132]: 2025-10-10 10:12:54.445 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:12:54 compute-1 nova_compute[235132]: 2025-10-10 10:12:54.518 2 DEBUG nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 10 10:12:54 compute-1 nova_compute[235132]: 2025-10-10 10:12:54.519 2 DEBUG nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 10 10:12:54 compute-1 nova_compute[235132]: 2025-10-10 10:12:54.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:54 compute-1 nova_compute[235132]: 2025-10-10 10:12:54.550 2 INFO nova.compute.manager [None req-52db11d7-5279-4395-931e-77d5220cbede 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Get console output
Oct 10 10:12:54 compute-1 nova_compute[235132]: 2025-10-10 10:12:54.555 631 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 10 10:12:54 compute-1 podman[240546]: 2025-10-10 10:12:54.625017486 +0000 UTC m=+0.124204217 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:12:54 compute-1 nova_compute[235132]: 2025-10-10 10:12:54.754 2 WARNING nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:12:54 compute-1 nova_compute[235132]: 2025-10-10 10:12:54.755 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4734MB free_disk=59.94288635253906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:12:54 compute-1 nova_compute[235132]: 2025-10-10 10:12:54.755 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:12:54 compute-1 nova_compute[235132]: 2025-10-10 10:12:54.756 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:12:54 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:54 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:54 compute-1 nova_compute[235132]: 2025-10-10 10:12:54.829 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Instance 2fe2b257-7e1f-46c2-aed9-0593c533e290 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 10 10:12:54 compute-1 nova_compute[235132]: 2025-10-10 10:12:54.830 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:12:54 compute-1 nova_compute[235132]: 2025-10-10 10:12:54.830 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:12:54 compute-1 nova_compute[235132]: 2025-10-10 10:12:54.882 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:12:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:55 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c003940 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:55 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/1031806664' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:12:55 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:12:55 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2515670112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:12:55 compute-1 nova_compute[235132]: 2025-10-10 10:12:55.367 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:12:55 compute-1 nova_compute[235132]: 2025-10-10 10:12:55.373 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:12:55 compute-1 nova_compute[235132]: 2025-10-10 10:12:55.392 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:12:55 compute-1 nova_compute[235132]: 2025-10-10 10:12:55.418 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:12:55 compute-1 nova_compute[235132]: 2025-10-10 10:12:55.419 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:12:55 compute-1 nova_compute[235132]: 2025-10-10 10:12:55.494 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:12:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:55 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:55 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:55 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:55 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:55.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:56 compute-1 ceph-mon[79167]: pgmap v809: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 10 10:12:56 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/2515670112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:12:56 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:56 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:56 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:56.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:56 compute-1 sshd-session[240473]: Failed password for root from 193.46.255.159 port 18412 ssh2
Oct 10 10:12:56 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:56 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/101257 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 10:12:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:57 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:12:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:57 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c003940 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:57 compute-1 nova_compute[235132]: 2025-10-10 10:12:57.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:57 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:57 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:57 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:57.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:58 compute-1 sudo[240597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:12:58 compute-1 sudo[240597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:12:58 compute-1 sudo[240597]: pam_unix(sudo:session): session closed for user root
Oct 10 10:12:58 compute-1 ceph-mon[79167]: pgmap v810: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 10 10:12:58 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:58 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:58 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:12:58.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:12:58 compute-1 sshd-session[240473]: Received disconnect from 193.46.255.159 port 18412:11:  [preauth]
Oct 10 10:12:58 compute-1 sshd-session[240473]: Disconnected from authenticating user root 193.46.255.159 port 18412 [preauth]
Oct 10 10:12:58 compute-1 sshd-session[240473]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.159  user=root
Oct 10 10:12:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:58 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:59 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:59 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:59 compute-1 unix_chkpwd[240625]: password check failed for user (root)
Oct 10 10:12:59 compute-1 sshd-session[240622]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.159  user=root
Oct 10 10:12:59 compute-1 nova_compute[235132]: 2025-10-10 10:12:59.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:12:59 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:12:59 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:12:59 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:12:59 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:12:59 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:12:59.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:00 compute-1 ceph-mon[79167]: pgmap v811: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct 10 10:13:00 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:00 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:13:00 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:00.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:13:00 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:00 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c003940 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:01 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:01 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:01 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:01 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:01 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:01 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:01 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:01.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:02 compute-1 sshd-session[240622]: Failed password for root from 193.46.255.159 port 23852 ssh2
Oct 10 10:13:02 compute-1 ceph-mon[79167]: pgmap v812: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 10 10:13:02 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:13:02 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:02 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:13:02 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:02.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:13:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:13:02 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:02 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:02 compute-1 nova_compute[235132]: 2025-10-10 10:13:02.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:02 compute-1 nova_compute[235132]: 2025-10-10 10:13:02.955 2 DEBUG oslo_concurrency.lockutils [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "interface-2fe2b257-7e1f-46c2-aed9-0593c533e290-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:13:02 compute-1 nova_compute[235132]: 2025-10-10 10:13:02.956 2 DEBUG oslo_concurrency.lockutils [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "interface-2fe2b257-7e1f-46c2-aed9-0593c533e290-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:13:02 compute-1 nova_compute[235132]: 2025-10-10 10:13:02.957 2 DEBUG nova.objects.instance [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'flavor' on Instance uuid 2fe2b257-7e1f-46c2-aed9-0593c533e290 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 10:13:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:03 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:03 compute-1 nova_compute[235132]: 2025-10-10 10:13:03.751 2 DEBUG nova.objects.instance [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'pci_requests' on Instance uuid 2fe2b257-7e1f-46c2-aed9-0593c533e290 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 10:13:03 compute-1 nova_compute[235132]: 2025-10-10 10:13:03.772 2 DEBUG nova.network.neutron [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 10 10:13:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:03 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5340001b90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:03 compute-1 unix_chkpwd[240628]: password check failed for user (root)
Oct 10 10:13:03 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:03 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:13:03 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:03.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:13:03 compute-1 nova_compute[235132]: 2025-10-10 10:13:03.988 2 DEBUG nova.policy [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7956778c03764aaf8906c9b435337976', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 10 10:13:04 compute-1 ceph-mon[79167]: pgmap v813: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 10 10:13:04 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:04 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:04 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:04.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:04 compute-1 nova_compute[235132]: 2025-10-10 10:13:04.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:04 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:04 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:04 compute-1 nova_compute[235132]: 2025-10-10 10:13:04.876 2 DEBUG nova.network.neutron [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Successfully created port: 9ea527cd-71d7-4979-bef2-4cbe7f0038cf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 10 10:13:05 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:05 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:05 compute-1 nova_compute[235132]: 2025-10-10 10:13:05.736 2 DEBUG nova.network.neutron [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Successfully updated port: 9ea527cd-71d7-4979-bef2-4cbe7f0038cf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 10 10:13:05 compute-1 sshd-session[240622]: Failed password for root from 193.46.255.159 port 23852 ssh2
Oct 10 10:13:05 compute-1 nova_compute[235132]: 2025-10-10 10:13:05.759 2 DEBUG oslo_concurrency.lockutils [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "refresh_cache-2fe2b257-7e1f-46c2-aed9-0593c533e290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:13:05 compute-1 nova_compute[235132]: 2025-10-10 10:13:05.760 2 DEBUG oslo_concurrency.lockutils [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquired lock "refresh_cache-2fe2b257-7e1f-46c2-aed9-0593c533e290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:13:05 compute-1 nova_compute[235132]: 2025-10-10 10:13:05.760 2 DEBUG nova.network.neutron [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 10 10:13:05 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:05 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:05 compute-1 nova_compute[235132]: 2025-10-10 10:13:05.864 2 DEBUG nova.compute.manager [req-81069918-9143-4e13-b82f-629d695d2a1c req-fbb21ddf-7861-4b13-92e6-a89d18e73b73 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Received event network-changed-9ea527cd-71d7-4979-bef2-4cbe7f0038cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:13:05 compute-1 nova_compute[235132]: 2025-10-10 10:13:05.865 2 DEBUG nova.compute.manager [req-81069918-9143-4e13-b82f-629d695d2a1c req-fbb21ddf-7861-4b13-92e6-a89d18e73b73 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Refreshing instance network info cache due to event network-changed-9ea527cd-71d7-4979-bef2-4cbe7f0038cf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 10 10:13:05 compute-1 nova_compute[235132]: 2025-10-10 10:13:05.865 2 DEBUG oslo_concurrency.lockutils [req-81069918-9143-4e13-b82f-629d695d2a1c req-fbb21ddf-7861-4b13-92e6-a89d18e73b73 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "refresh_cache-2fe2b257-7e1f-46c2-aed9-0593c533e290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:13:05 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:05 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:13:05 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:05.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:13:06 compute-1 unix_chkpwd[240630]: password check failed for user (root)
Oct 10 10:13:06 compute-1 ceph-mon[79167]: pgmap v814: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 937 B/s rd, 14 KiB/s wr, 1 op/s
Oct 10 10:13:06 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:06 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:06 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:06.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:06 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:06 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:07 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:13:07 compute-1 nova_compute[235132]: 2025-10-10 10:13:07.766 2 DEBUG nova.network.neutron [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Updating instance_info_cache with network_info: [{"id": "eb2cd434-444d-4138-bbe8-948bf47d3986", "address": "fa:16:3e:8b:9e:3d", "network": {"id": "c1ba46b2-7e02-4d4f-b296-3e1e1f027d22", "bridge": "br-int", "label": "tempest-network-smoke--2006106686", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb2cd434-44", "ovs_interfaceid": "eb2cd434-444d-4138-bbe8-948bf47d3986", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "address": "fa:16:3e:33:d2:11", "network": {"id": "2d451f14-1551-484b-9a8f-b854ec5a8acc", "bridge": "br-int", "label": "tempest-network-smoke--1627200920", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ea527cd-71", "ovs_interfaceid": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:13:07 compute-1 nova_compute[235132]: 2025-10-10 10:13:07.788 2 DEBUG oslo_concurrency.lockutils [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Releasing lock "refresh_cache-2fe2b257-7e1f-46c2-aed9-0593c533e290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:13:07 compute-1 nova_compute[235132]: 2025-10-10 10:13:07.789 2 DEBUG oslo_concurrency.lockutils [req-81069918-9143-4e13-b82f-629d695d2a1c req-fbb21ddf-7861-4b13-92e6-a89d18e73b73 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquired lock "refresh_cache-2fe2b257-7e1f-46c2-aed9-0593c533e290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:13:07 compute-1 nova_compute[235132]: 2025-10-10 10:13:07.790 2 DEBUG nova.network.neutron [req-81069918-9143-4e13-b82f-629d695d2a1c req-fbb21ddf-7861-4b13-92e6-a89d18e73b73 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Refreshing network info cache for port 9ea527cd-71d7-4979-bef2-4cbe7f0038cf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 10 10:13:07 compute-1 nova_compute[235132]: 2025-10-10 10:13:07.795 2 DEBUG nova.virt.libvirt.vif [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-10T10:12:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1167416058',display_name='tempest-TestNetworkBasicOps-server-1167416058',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1167416058',id=3,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMTGlkGkqxsntvfgB83G1SiWbnvW7dUHWa7mCt3Bc6Si4+/bYujJEPULms51s1qxf1oywpdTq7/IqVAkQU+odhhf7e0wmrvEXEJnRsSzZiDWmk6FDUoWkd1hV01XsyivgQ==',key_name='tempest-TestNetworkBasicOps-919848835',keypairs=<?>,launch_index=0,launched_at=2025-10-10T10:12:36Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-jd07fe8o',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-10T10:12:36Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=2fe2b257-7e1f-46c2-aed9-0593c533e290,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "address": "fa:16:3e:33:d2:11", "network": {"id": "2d451f14-1551-484b-9a8f-b854ec5a8acc", "bridge": "br-int", "label": "tempest-network-smoke--1627200920", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ea527cd-71", "ovs_interfaceid": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 10 10:13:07 compute-1 nova_compute[235132]: 2025-10-10 10:13:07.796 2 DEBUG nova.network.os_vif_util [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "address": "fa:16:3e:33:d2:11", "network": {"id": "2d451f14-1551-484b-9a8f-b854ec5a8acc", "bridge": "br-int", "label": "tempest-network-smoke--1627200920", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ea527cd-71", "ovs_interfaceid": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 10 10:13:07 compute-1 nova_compute[235132]: 2025-10-10 10:13:07.798 2 DEBUG nova.network.os_vif_util [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:d2:11,bridge_name='br-int',has_traffic_filtering=True,id=9ea527cd-71d7-4979-bef2-4cbe7f0038cf,network=Network(2d451f14-1551-484b-9a8f-b854ec5a8acc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ea527cd-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 10 10:13:07 compute-1 nova_compute[235132]: 2025-10-10 10:13:07.799 2 DEBUG os_vif [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:d2:11,bridge_name='br-int',has_traffic_filtering=True,id=9ea527cd-71d7-4979-bef2-4cbe7f0038cf,network=Network(2d451f14-1551-484b-9a8f-b854ec5a8acc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ea527cd-71') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 10 10:13:07 compute-1 nova_compute[235132]: 2025-10-10 10:13:07.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:07 compute-1 nova_compute[235132]: 2025-10-10 10:13:07.801 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:13:07 compute-1 nova_compute[235132]: 2025-10-10 10:13:07.802 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 10 10:13:07 compute-1 nova_compute[235132]: 2025-10-10 10:13:07.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:07 compute-1 nova_compute[235132]: 2025-10-10 10:13:07.809 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9ea527cd-71, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:13:07 compute-1 nova_compute[235132]: 2025-10-10 10:13:07.810 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9ea527cd-71, col_values=(('external_ids', {'iface-id': '9ea527cd-71d7-4979-bef2-4cbe7f0038cf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:33:d2:11', 'vm-uuid': '2fe2b257-7e1f-46c2-aed9-0593c533e290'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:13:07 compute-1 nova_compute[235132]: 2025-10-10 10:13:07.812 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:07 compute-1 NetworkManager[44982]: <info>  [1760091187.8139] manager: (tap9ea527cd-71): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Oct 10 10:13:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:07 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:07 compute-1 nova_compute[235132]: 2025-10-10 10:13:07.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 10 10:13:07 compute-1 nova_compute[235132]: 2025-10-10 10:13:07.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:07 compute-1 nova_compute[235132]: 2025-10-10 10:13:07.825 2 INFO os_vif [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:d2:11,bridge_name='br-int',has_traffic_filtering=True,id=9ea527cd-71d7-4979-bef2-4cbe7f0038cf,network=Network(2d451f14-1551-484b-9a8f-b854ec5a8acc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ea527cd-71')
Oct 10 10:13:07 compute-1 nova_compute[235132]: 2025-10-10 10:13:07.826 2 DEBUG nova.virt.libvirt.vif [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-10T10:12:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1167416058',display_name='tempest-TestNetworkBasicOps-server-1167416058',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1167416058',id=3,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMTGlkGkqxsntvfgB83G1SiWbnvW7dUHWa7mCt3Bc6Si4+/bYujJEPULms51s1qxf1oywpdTq7/IqVAkQU+odhhf7e0wmrvEXEJnRsSzZiDWmk6FDUoWkd1hV01XsyivgQ==',key_name='tempest-TestNetworkBasicOps-919848835',keypairs=<?>,launch_index=0,launched_at=2025-10-10T10:12:36Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-jd07fe8o',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-10T10:12:36Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=2fe2b257-7e1f-46c2-aed9-0593c533e290,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "address": "fa:16:3e:33:d2:11", "network": {"id": "2d451f14-1551-484b-9a8f-b854ec5a8acc", "bridge": "br-int", "label": "tempest-network-smoke--1627200920", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ea527cd-71", "ovs_interfaceid": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 10 10:13:07 compute-1 nova_compute[235132]: 2025-10-10 10:13:07.826 2 DEBUG nova.network.os_vif_util [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "address": "fa:16:3e:33:d2:11", "network": {"id": "2d451f14-1551-484b-9a8f-b854ec5a8acc", "bridge": "br-int", "label": "tempest-network-smoke--1627200920", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ea527cd-71", "ovs_interfaceid": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 10 10:13:07 compute-1 nova_compute[235132]: 2025-10-10 10:13:07.827 2 DEBUG nova.network.os_vif_util [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:d2:11,bridge_name='br-int',has_traffic_filtering=True,id=9ea527cd-71d7-4979-bef2-4cbe7f0038cf,network=Network(2d451f14-1551-484b-9a8f-b854ec5a8acc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ea527cd-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 10 10:13:07 compute-1 nova_compute[235132]: 2025-10-10 10:13:07.830 2 DEBUG nova.virt.libvirt.guest [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] attach device xml: <interface type="ethernet">
Oct 10 10:13:07 compute-1 nova_compute[235132]:   <mac address="fa:16:3e:33:d2:11"/>
Oct 10 10:13:07 compute-1 nova_compute[235132]:   <model type="virtio"/>
Oct 10 10:13:07 compute-1 nova_compute[235132]:   <driver name="vhost" rx_queue_size="512"/>
Oct 10 10:13:07 compute-1 nova_compute[235132]:   <mtu size="1442"/>
Oct 10 10:13:07 compute-1 nova_compute[235132]:   <target dev="tap9ea527cd-71"/>
Oct 10 10:13:07 compute-1 nova_compute[235132]: </interface>
Oct 10 10:13:07 compute-1 nova_compute[235132]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 10 10:13:07 compute-1 nova_compute[235132]: 2025-10-10 10:13:07.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:07 compute-1 kernel: tap9ea527cd-71: entered promiscuous mode
Oct 10 10:13:07 compute-1 NetworkManager[44982]: <info>  [1760091187.8467] manager: (tap9ea527cd-71): new Tun device (/org/freedesktop/NetworkManager/Devices/40)
Oct 10 10:13:07 compute-1 ovn_controller[131749]: 2025-10-10T10:13:07Z|00045|binding|INFO|Claiming lport 9ea527cd-71d7-4979-bef2-4cbe7f0038cf for this chassis.
Oct 10 10:13:07 compute-1 ovn_controller[131749]: 2025-10-10T10:13:07Z|00046|binding|INFO|9ea527cd-71d7-4979-bef2-4cbe7f0038cf: Claiming fa:16:3e:33:d2:11 10.100.0.19
Oct 10 10:13:07 compute-1 nova_compute[235132]: 2025-10-10 10:13:07.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:07 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:07.863 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:d2:11 10.100.0.19'], port_security=['fa:16:3e:33:d2:11 10.100.0.19'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28', 'neutron:device_id': '2fe2b257-7e1f-46c2-aed9-0593c533e290', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2d451f14-1551-484b-9a8f-b854ec5a8acc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '79abf760-0fb0-448c-b5c8-75027ac31ae3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8d7aa34-fd4e-44cc-8eaa-a67a270b663f, chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>], logical_port=9ea527cd-71d7-4979-bef2-4cbe7f0038cf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:13:07 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:07.864 141156 INFO neutron.agent.ovn.metadata.agent [-] Port 9ea527cd-71d7-4979-bef2-4cbe7f0038cf in datapath 2d451f14-1551-484b-9a8f-b854ec5a8acc bound to our chassis
Oct 10 10:13:07 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:07.865 141156 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2d451f14-1551-484b-9a8f-b854ec5a8acc
Oct 10 10:13:07 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:07.881 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[500dd0b6-b97d-41cb-9946-f422a37c11b2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:13:07 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:07.882 141156 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2d451f14-11 in ovnmeta-2d451f14-1551-484b-9a8f-b854ec5a8acc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 10 10:13:07 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:07.884 238898 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2d451f14-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 10 10:13:07 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:07.884 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[c24bdabe-03e3-41db-b609-f6452bcc41e6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:13:07 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:07.885 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[9c0c0bc9-40fa-45bb-bde0-fe6c421088f2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:13:07 compute-1 systemd-udevd[240639]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 10:13:07 compute-1 nova_compute[235132]: 2025-10-10 10:13:07.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:07 compute-1 ovn_controller[131749]: 2025-10-10T10:13:07Z|00047|binding|INFO|Setting lport 9ea527cd-71d7-4979-bef2-4cbe7f0038cf ovn-installed in OVS
Oct 10 10:13:07 compute-1 ovn_controller[131749]: 2025-10-10T10:13:07Z|00048|binding|INFO|Setting lport 9ea527cd-71d7-4979-bef2-4cbe7f0038cf up in Southbound
Oct 10 10:13:07 compute-1 nova_compute[235132]: 2025-10-10 10:13:07.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:07 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:07.907 141275 DEBUG oslo.privsep.daemon [-] privsep: reply[20842d0c-17f1-4e5e-a7d2-60e8fa07b64e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:13:07 compute-1 NetworkManager[44982]: <info>  [1760091187.9184] device (tap9ea527cd-71): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 10:13:07 compute-1 NetworkManager[44982]: <info>  [1760091187.9193] device (tap9ea527cd-71): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 10 10:13:07 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:07.929 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[75e2fa74-47e6-473c-aaa6-6399643886ed]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:13:07 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:07 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:07 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:07.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:07 compute-1 nova_compute[235132]: 2025-10-10 10:13:07.958 2 DEBUG nova.virt.libvirt.driver [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 10 10:13:07 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:07.958 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[fb52d022-2eb1-4e6c-9ce7-0236c7bb36b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:13:07 compute-1 nova_compute[235132]: 2025-10-10 10:13:07.960 2 DEBUG nova.virt.libvirt.driver [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 10 10:13:07 compute-1 nova_compute[235132]: 2025-10-10 10:13:07.960 2 DEBUG nova.virt.libvirt.driver [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No VIF found with MAC fa:16:3e:8b:9e:3d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 10 10:13:07 compute-1 nova_compute[235132]: 2025-10-10 10:13:07.961 2 DEBUG nova.virt.libvirt.driver [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No VIF found with MAC fa:16:3e:33:d2:11, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 10 10:13:07 compute-1 NetworkManager[44982]: <info>  [1760091187.9647] manager: (tap2d451f14-10): new Veth device (/org/freedesktop/NetworkManager/Devices/41)
Oct 10 10:13:07 compute-1 systemd-udevd[240643]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 10:13:07 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:07.964 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[55890e9c-840f-45fe-a68c-ad6a8aaba63f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:13:07 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:07.991 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[1342a07c-0793-41c1-b650-5939b64fd6ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:13:07 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:07.994 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[709c7541-6099-42e0-bf44-9aea060d3b76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:13:08 compute-1 nova_compute[235132]: 2025-10-10 10:13:08.007 2 DEBUG nova.virt.libvirt.guest [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 10:13:08 compute-1 nova_compute[235132]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 10:13:08 compute-1 nova_compute[235132]:   <nova:name>tempest-TestNetworkBasicOps-server-1167416058</nova:name>
Oct 10 10:13:08 compute-1 nova_compute[235132]:   <nova:creationTime>2025-10-10 10:13:08</nova:creationTime>
Oct 10 10:13:08 compute-1 nova_compute[235132]:   <nova:flavor name="m1.nano">
Oct 10 10:13:08 compute-1 nova_compute[235132]:     <nova:memory>128</nova:memory>
Oct 10 10:13:08 compute-1 nova_compute[235132]:     <nova:disk>1</nova:disk>
Oct 10 10:13:08 compute-1 nova_compute[235132]:     <nova:swap>0</nova:swap>
Oct 10 10:13:08 compute-1 nova_compute[235132]:     <nova:ephemeral>0</nova:ephemeral>
Oct 10 10:13:08 compute-1 nova_compute[235132]:     <nova:vcpus>1</nova:vcpus>
Oct 10 10:13:08 compute-1 nova_compute[235132]:   </nova:flavor>
Oct 10 10:13:08 compute-1 nova_compute[235132]:   <nova:owner>
Oct 10 10:13:08 compute-1 nova_compute[235132]:     <nova:user uuid="7956778c03764aaf8906c9b435337976">tempest-TestNetworkBasicOps-188749107-project-member</nova:user>
Oct 10 10:13:08 compute-1 nova_compute[235132]:     <nova:project uuid="d5e531d4b440422d946eaf6fd4e166f7">tempest-TestNetworkBasicOps-188749107</nova:project>
Oct 10 10:13:08 compute-1 nova_compute[235132]:   </nova:owner>
Oct 10 10:13:08 compute-1 nova_compute[235132]:   <nova:root type="image" uuid="5ae78700-970d-45b4-a57d-978a054c7519"/>
Oct 10 10:13:08 compute-1 nova_compute[235132]:   <nova:ports>
Oct 10 10:13:08 compute-1 nova_compute[235132]:     <nova:port uuid="eb2cd434-444d-4138-bbe8-948bf47d3986">
Oct 10 10:13:08 compute-1 nova_compute[235132]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 10 10:13:08 compute-1 nova_compute[235132]:     </nova:port>
Oct 10 10:13:08 compute-1 nova_compute[235132]:     <nova:port uuid="9ea527cd-71d7-4979-bef2-4cbe7f0038cf">
Oct 10 10:13:08 compute-1 nova_compute[235132]:       <nova:ip type="fixed" address="10.100.0.19" ipVersion="4"/>
Oct 10 10:13:08 compute-1 nova_compute[235132]:     </nova:port>
Oct 10 10:13:08 compute-1 nova_compute[235132]:   </nova:ports>
Oct 10 10:13:08 compute-1 nova_compute[235132]: </nova:instance>
Oct 10 10:13:08 compute-1 nova_compute[235132]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 10 10:13:08 compute-1 NetworkManager[44982]: <info>  [1760091188.0106] device (tap2d451f14-10): carrier: link connected
Oct 10 10:13:08 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:08.014 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[fe4be0b9-2f30-4afa-bbef-f90839a4901a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:13:08 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:08.036 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[48b563c1-1d05-4842-b033-1d8e4c65c76f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2d451f14-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:ce:51'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 20], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 410172, 'reachable_time': 18646, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240665, 'error': None, 'target': 'ovnmeta-2d451f14-1551-484b-9a8f-b854ec5a8acc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:13:08 compute-1 nova_compute[235132]: 2025-10-10 10:13:08.046 2 DEBUG oslo_concurrency.lockutils [None req-7dbf6b84-9beb-4e6c-b39a-98c783efc096 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "interface-2fe2b257-7e1f-46c2-aed9-0593c533e290-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.089s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:13:08 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:08.052 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[84bb6b73-68a6-4356-924c-bcac4e3026c5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1d:ce51'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 410172, 'tstamp': 410172}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240666, 'error': None, 'target': 'ovnmeta-2d451f14-1551-484b-9a8f-b854ec5a8acc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:13:08 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:08.071 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[5163dee8-72bf-4316-8eb7-af3865d8264a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2d451f14-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1d:ce:51'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 20], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 410172, 'reachable_time': 18646, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 240667, 'error': None, 'target': 'ovnmeta-2d451f14-1551-484b-9a8f-b854ec5a8acc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:13:08 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:08.097 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[30bf8c62-b034-41ec-b2df-a691e4cdbdf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:13:08 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:08.153 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[05cfe489-a543-4bab-a2f7-a471f0cee972]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:13:08 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:08.155 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d451f14-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:13:08 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:08.155 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 10 10:13:08 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:08.155 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2d451f14-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:13:08 compute-1 nova_compute[235132]: 2025-10-10 10:13:08.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:08 compute-1 NetworkManager[44982]: <info>  [1760091188.1578] manager: (tap2d451f14-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Oct 10 10:13:08 compute-1 kernel: tap2d451f14-10: entered promiscuous mode
Oct 10 10:13:08 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:08.160 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2d451f14-10, col_values=(('external_ids', {'iface-id': '3bbca16e-9180-468e-a8f6-96640db7dad5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:13:08 compute-1 nova_compute[235132]: 2025-10-10 10:13:08.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:08 compute-1 ovn_controller[131749]: 2025-10-10T10:13:08Z|00049|binding|INFO|Releasing lport 3bbca16e-9180-468e-a8f6-96640db7dad5 from this chassis (sb_readonly=0)
Oct 10 10:13:08 compute-1 nova_compute[235132]: 2025-10-10 10:13:08.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:08 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:08.176 141156 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2d451f14-1551-484b-9a8f-b854ec5a8acc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2d451f14-1551-484b-9a8f-b854ec5a8acc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 10 10:13:08 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:08.177 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[f2031644-88ea-49e7-aceb-2d895c95992a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:13:08 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:08.178 141156 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 10 10:13:08 compute-1 ovn_metadata_agent[141151]: global
Oct 10 10:13:08 compute-1 ovn_metadata_agent[141151]:     log         /dev/log local0 debug
Oct 10 10:13:08 compute-1 ovn_metadata_agent[141151]:     log-tag     haproxy-metadata-proxy-2d451f14-1551-484b-9a8f-b854ec5a8acc
Oct 10 10:13:08 compute-1 ovn_metadata_agent[141151]:     user        root
Oct 10 10:13:08 compute-1 ovn_metadata_agent[141151]:     group       root
Oct 10 10:13:08 compute-1 ovn_metadata_agent[141151]:     maxconn     1024
Oct 10 10:13:08 compute-1 ovn_metadata_agent[141151]:     pidfile     /var/lib/neutron/external/pids/2d451f14-1551-484b-9a8f-b854ec5a8acc.pid.haproxy
Oct 10 10:13:08 compute-1 ovn_metadata_agent[141151]:     daemon
Oct 10 10:13:08 compute-1 ovn_metadata_agent[141151]: 
Oct 10 10:13:08 compute-1 ovn_metadata_agent[141151]: defaults
Oct 10 10:13:08 compute-1 ovn_metadata_agent[141151]:     log global
Oct 10 10:13:08 compute-1 ovn_metadata_agent[141151]:     mode http
Oct 10 10:13:08 compute-1 ovn_metadata_agent[141151]:     option httplog
Oct 10 10:13:08 compute-1 ovn_metadata_agent[141151]:     option dontlognull
Oct 10 10:13:08 compute-1 ovn_metadata_agent[141151]:     option http-server-close
Oct 10 10:13:08 compute-1 ovn_metadata_agent[141151]:     option forwardfor
Oct 10 10:13:08 compute-1 ovn_metadata_agent[141151]:     retries                 3
Oct 10 10:13:08 compute-1 ovn_metadata_agent[141151]:     timeout http-request    30s
Oct 10 10:13:08 compute-1 ovn_metadata_agent[141151]:     timeout connect         30s
Oct 10 10:13:08 compute-1 ovn_metadata_agent[141151]:     timeout client          32s
Oct 10 10:13:08 compute-1 ovn_metadata_agent[141151]:     timeout server          32s
Oct 10 10:13:08 compute-1 ovn_metadata_agent[141151]:     timeout http-keep-alive 30s
Oct 10 10:13:08 compute-1 ovn_metadata_agent[141151]: 
Oct 10 10:13:08 compute-1 ovn_metadata_agent[141151]: 
Oct 10 10:13:08 compute-1 ovn_metadata_agent[141151]: listen listener
Oct 10 10:13:08 compute-1 ovn_metadata_agent[141151]:     bind 169.254.169.254:80
Oct 10 10:13:08 compute-1 ovn_metadata_agent[141151]:     server metadata /var/lib/neutron/metadata_proxy
Oct 10 10:13:08 compute-1 ovn_metadata_agent[141151]:     http-request add-header X-OVN-Network-ID 2d451f14-1551-484b-9a8f-b854ec5a8acc
Oct 10 10:13:08 compute-1 ovn_metadata_agent[141151]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 10 10:13:08 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:08.179 141156 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2d451f14-1551-484b-9a8f-b854ec5a8acc', 'env', 'PROCESS_TAG=haproxy-2d451f14-1551-484b-9a8f-b854ec5a8acc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2d451f14-1551-484b-9a8f-b854ec5a8acc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 10 10:13:08 compute-1 nova_compute[235132]: 2025-10-10 10:13:08.202 2 DEBUG nova.compute.manager [req-3841684d-4b41-4c11-b2a8-4d09aeb8ccd7 req-c0ae5e25-52ff-40e4-aa1c-fba14bad8e24 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Received event network-vif-plugged-9ea527cd-71d7-4979-bef2-4cbe7f0038cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:13:08 compute-1 nova_compute[235132]: 2025-10-10 10:13:08.202 2 DEBUG oslo_concurrency.lockutils [req-3841684d-4b41-4c11-b2a8-4d09aeb8ccd7 req-c0ae5e25-52ff-40e4-aa1c-fba14bad8e24 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:13:08 compute-1 nova_compute[235132]: 2025-10-10 10:13:08.203 2 DEBUG oslo_concurrency.lockutils [req-3841684d-4b41-4c11-b2a8-4d09aeb8ccd7 req-c0ae5e25-52ff-40e4-aa1c-fba14bad8e24 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:13:08 compute-1 nova_compute[235132]: 2025-10-10 10:13:08.203 2 DEBUG oslo_concurrency.lockutils [req-3841684d-4b41-4c11-b2a8-4d09aeb8ccd7 req-c0ae5e25-52ff-40e4-aa1c-fba14bad8e24 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:13:08 compute-1 nova_compute[235132]: 2025-10-10 10:13:08.204 2 DEBUG nova.compute.manager [req-3841684d-4b41-4c11-b2a8-4d09aeb8ccd7 req-c0ae5e25-52ff-40e4-aa1c-fba14bad8e24 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] No waiting events found dispatching network-vif-plugged-9ea527cd-71d7-4979-bef2-4cbe7f0038cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 10:13:08 compute-1 nova_compute[235132]: 2025-10-10 10:13:08.204 2 WARNING nova.compute.manager [req-3841684d-4b41-4c11-b2a8-4d09aeb8ccd7 req-c0ae5e25-52ff-40e4-aa1c-fba14bad8e24 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Received unexpected event network-vif-plugged-9ea527cd-71d7-4979-bef2-4cbe7f0038cf for instance with vm_state active and task_state None.
Oct 10 10:13:08 compute-1 ceph-mon[79167]: pgmap v815: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 14 KiB/s wr, 1 op/s
Oct 10 10:13:08 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:08 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:08 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:08.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:08 compute-1 sshd-session[240622]: Failed password for root from 193.46.255.159 port 23852 ssh2
Oct 10 10:13:08 compute-1 podman[240699]: 2025-10-10 10:13:08.679814992 +0000 UTC m=+0.092043527 container create a51ba6690d17c2e1a677a2493334040180ae60132b9142f7651626ab76e08adc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2d451f14-1551-484b-9a8f-b854ec5a8acc, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 10 10:13:08 compute-1 systemd[1]: Started libpod-conmon-a51ba6690d17c2e1a677a2493334040180ae60132b9142f7651626ab76e08adc.scope.
Oct 10 10:13:08 compute-1 podman[240699]: 2025-10-10 10:13:08.636489198 +0000 UTC m=+0.048717773 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 10 10:13:08 compute-1 systemd[1]: Started libcrun container.
Oct 10 10:13:08 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e424e3f4e3815f244fa79fcc7f0f5daf62663db4a716ad6c422fb36d7b3a0dc9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 10 10:13:08 compute-1 podman[240699]: 2025-10-10 10:13:08.769275108 +0000 UTC m=+0.181503693 container init a51ba6690d17c2e1a677a2493334040180ae60132b9142f7651626ab76e08adc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2d451f14-1551-484b-9a8f-b854ec5a8acc, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 10 10:13:08 compute-1 podman[240699]: 2025-10-10 10:13:08.779912409 +0000 UTC m=+0.192140944 container start a51ba6690d17c2e1a677a2493334040180ae60132b9142f7651626ab76e08adc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2d451f14-1551-484b-9a8f-b854ec5a8acc, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:13:08 compute-1 neutron-haproxy-ovnmeta-2d451f14-1551-484b-9a8f-b854ec5a8acc[240714]: [NOTICE]   (240718) : New worker (240720) forked
Oct 10 10:13:08 compute-1 neutron-haproxy-ovnmeta-2d451f14-1551-484b-9a8f-b854ec5a8acc[240714]: [NOTICE]   (240718) : Loading success.
Oct 10 10:13:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:08 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5340001b90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:09 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_22] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:09 compute-1 ovn_controller[131749]: 2025-10-10T10:13:09Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:33:d2:11 10.100.0.19
Oct 10 10:13:09 compute-1 ovn_controller[131749]: 2025-10-10T10:13:09Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:33:d2:11 10.100.0.19
Oct 10 10:13:09 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:09 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:09 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:09 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:09 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:09.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:09 compute-1 podman[240730]: 2025-10-10 10:13:09.966019348 +0000 UTC m=+0.064335860 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, 
org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 10:13:10 compute-1 nova_compute[235132]: 2025-10-10 10:13:10.117 2 DEBUG oslo_concurrency.lockutils [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "interface-2fe2b257-7e1f-46c2-aed9-0593c533e290-9ea527cd-71d7-4979-bef2-4cbe7f0038cf" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:13:10 compute-1 nova_compute[235132]: 2025-10-10 10:13:10.117 2 DEBUG oslo_concurrency.lockutils [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "interface-2fe2b257-7e1f-46c2-aed9-0593c533e290-9ea527cd-71d7-4979-bef2-4cbe7f0038cf" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:13:10 compute-1 nova_compute[235132]: 2025-10-10 10:13:10.138 2 DEBUG nova.objects.instance [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'flavor' on Instance uuid 2fe2b257-7e1f-46c2-aed9-0593c533e290 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 10:13:10 compute-1 nova_compute[235132]: 2025-10-10 10:13:10.172 2 DEBUG nova.virt.libvirt.vif [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-10T10:12:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1167416058',display_name='tempest-TestNetworkBasicOps-server-1167416058',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1167416058',id=3,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMTGlkGkqxsntvfgB83G1SiWbnvW7dUHWa7mCt3Bc6Si4+/bYujJEPULms51s1qxf1oywpdTq7/IqVAkQU+odhhf7e0wmrvEXEJnRsSzZiDWmk6FDUoWkd1hV01XsyivgQ==',key_name='tempest-TestNetworkBasicOps-919848835',keypairs=<?>,launch_index=0,launched_at=2025-10-10T10:12:36Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-jd07fe8o',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-10T10:12:36Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=2fe2b257-7e1f-46c2-aed9-0593c533e290,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "address": "fa:16:3e:33:d2:11", "network": {"id": "2d451f14-1551-484b-9a8f-b854ec5a8acc", "bridge": "br-int", "label": "tempest-network-smoke--1627200920", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ea527cd-71", "ovs_interfaceid": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 10 10:13:10 compute-1 nova_compute[235132]: 2025-10-10 10:13:10.173 2 DEBUG nova.network.os_vif_util [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "address": "fa:16:3e:33:d2:11", "network": {"id": "2d451f14-1551-484b-9a8f-b854ec5a8acc", "bridge": "br-int", "label": "tempest-network-smoke--1627200920", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ea527cd-71", "ovs_interfaceid": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 10 10:13:10 compute-1 nova_compute[235132]: 2025-10-10 10:13:10.174 2 DEBUG nova.network.os_vif_util [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:d2:11,bridge_name='br-int',has_traffic_filtering=True,id=9ea527cd-71d7-4979-bef2-4cbe7f0038cf,network=Network(2d451f14-1551-484b-9a8f-b854ec5a8acc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ea527cd-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 10 10:13:10 compute-1 nova_compute[235132]: 2025-10-10 10:13:10.179 2 DEBUG nova.virt.libvirt.guest [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:33:d2:11"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9ea527cd-71"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 10 10:13:10 compute-1 nova_compute[235132]: 2025-10-10 10:13:10.182 2 DEBUG nova.virt.libvirt.guest [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:33:d2:11"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9ea527cd-71"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 10 10:13:10 compute-1 nova_compute[235132]: 2025-10-10 10:13:10.186 2 DEBUG nova.virt.libvirt.driver [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Attempting to detach device tap9ea527cd-71 from instance 2fe2b257-7e1f-46c2-aed9-0593c533e290 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 10 10:13:10 compute-1 nova_compute[235132]: 2025-10-10 10:13:10.187 2 DEBUG nova.virt.libvirt.guest [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] detach device xml: <interface type="ethernet">
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <mac address="fa:16:3e:33:d2:11"/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <model type="virtio"/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <driver name="vhost" rx_queue_size="512"/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <mtu size="1442"/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <target dev="tap9ea527cd-71"/>
Oct 10 10:13:10 compute-1 nova_compute[235132]: </interface>
Oct 10 10:13:10 compute-1 nova_compute[235132]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 10 10:13:10 compute-1 nova_compute[235132]: 2025-10-10 10:13:10.196 2 DEBUG nova.virt.libvirt.guest [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:33:d2:11"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9ea527cd-71"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 10 10:13:10 compute-1 nova_compute[235132]: 2025-10-10 10:13:10.201 2 DEBUG nova.virt.libvirt.guest [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:33:d2:11"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9ea527cd-71"/></interface>not found in domain: <domain type='kvm' id='2'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <name>instance-00000003</name>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <uuid>2fe2b257-7e1f-46c2-aed9-0593c533e290</uuid>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <metadata>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <nova:name>tempest-TestNetworkBasicOps-server-1167416058</nova:name>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <nova:creationTime>2025-10-10 10:13:08</nova:creationTime>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <nova:flavor name="m1.nano">
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <nova:memory>128</nova:memory>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <nova:disk>1</nova:disk>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <nova:swap>0</nova:swap>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <nova:ephemeral>0</nova:ephemeral>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <nova:vcpus>1</nova:vcpus>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   </nova:flavor>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <nova:owner>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <nova:user uuid="7956778c03764aaf8906c9b435337976">tempest-TestNetworkBasicOps-188749107-project-member</nova:user>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <nova:project uuid="d5e531d4b440422d946eaf6fd4e166f7">tempest-TestNetworkBasicOps-188749107</nova:project>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   </nova:owner>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <nova:root type="image" uuid="5ae78700-970d-45b4-a57d-978a054c7519"/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <nova:ports>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <nova:port uuid="eb2cd434-444d-4138-bbe8-948bf47d3986">
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </nova:port>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <nova:port uuid="9ea527cd-71d7-4979-bef2-4cbe7f0038cf">
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <nova:ip type="fixed" address="10.100.0.19" ipVersion="4"/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </nova:port>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   </nova:ports>
Oct 10 10:13:10 compute-1 nova_compute[235132]: </nova:instance>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   </metadata>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <memory unit='KiB'>131072</memory>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <vcpu placement='static'>1</vcpu>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <resource>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <partition>/machine</partition>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   </resource>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <sysinfo type='smbios'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <system>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <entry name='manufacturer'>RDO</entry>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <entry name='product'>OpenStack Compute</entry>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <entry name='serial'>2fe2b257-7e1f-46c2-aed9-0593c533e290</entry>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <entry name='uuid'>2fe2b257-7e1f-46c2-aed9-0593c533e290</entry>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <entry name='family'>Virtual Machine</entry>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </system>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   </sysinfo>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <os>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <boot dev='hd'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <smbios mode='sysinfo'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   </os>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <features>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <acpi/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <apic/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <vmcoreinfo state='on'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   </features>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <cpu mode='custom' match='exact' check='full'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <vendor>AMD</vendor>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='require' name='x2apic'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='require' name='tsc-deadline'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='require' name='hypervisor'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='require' name='tsc_adjust'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='require' name='spec-ctrl'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='require' name='stibp'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='require' name='arch-capabilities'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='require' name='ssbd'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='require' name='cmp_legacy'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='require' name='overflow-recov'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='require' name='succor'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='require' name='ibrs'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='require' name='amd-ssbd'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='require' name='virt-ssbd'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='disable' name='lbrv'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='disable' name='tsc-scale'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='disable' name='vmcb-clean'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='disable' name='flushbyasid'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='disable' name='pause-filter'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='disable' name='pfthreshold'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='require' name='rdctl-no'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='require' name='mds-no'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='require' name='pschange-mc-no'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='require' name='gds-no'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='require' name='rfds-no'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='disable' name='xsaves'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='disable' name='svm'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='require' name='topoext'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='disable' name='npt'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='disable' name='nrip-save'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   </cpu>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <clock offset='utc'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <timer name='pit' tickpolicy='delay'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <timer name='hpet' present='no'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   </clock>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <on_poweroff>destroy</on_poweroff>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <on_reboot>restart</on_reboot>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <on_crash>destroy</on_crash>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <devices>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <disk type='network' device='disk'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <driver name='qemu' type='raw' cache='none'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <auth username='openstack'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:         <secret type='ceph' uuid='21f084a3-af34-5230-afe4-ea5cd24a55f4'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       </auth>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <source protocol='rbd' name='vms/2fe2b257-7e1f-46c2-aed9-0593c533e290_disk' index='2'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:         <host name='192.168.122.100' port='6789'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:         <host name='192.168.122.102' port='6789'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:         <host name='192.168.122.101' port='6789'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       </source>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target dev='vda' bus='virtio'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='virtio-disk0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </disk>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <disk type='network' device='cdrom'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <driver name='qemu' type='raw' cache='none'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <auth username='openstack'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:         <secret type='ceph' uuid='21f084a3-af34-5230-afe4-ea5cd24a55f4'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       </auth>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <source protocol='rbd' name='vms/2fe2b257-7e1f-46c2-aed9-0593c533e290_disk.config' index='1'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:         <host name='192.168.122.100' port='6789'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:         <host name='192.168.122.102' port='6789'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:         <host name='192.168.122.101' port='6789'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       </source>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target dev='sda' bus='sata'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <readonly/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='sata0-0-0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </disk>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='0' model='pcie-root'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pcie.0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='1' port='0x10'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.1'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='2' port='0x11'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.2'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='3' port='0x12'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.3'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='4' port='0x13'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.4'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='5' port='0x14'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.5'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='6' port='0x15'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.6'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='7' port='0x16'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.7'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='8' port='0x17'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.8'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='9' port='0x18'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.9'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='10' port='0x19'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.10'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='11' port='0x1a'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.11'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='12' port='0x1b'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.12'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='13' port='0x1c'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.13'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='14' port='0x1d'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.14'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='15' port='0x1e'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.15'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='16' port='0x1f'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.16'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='17' port='0x20'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.17'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='18' port='0x21'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.18'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='19' port='0x22'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.19'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='20' port='0x23'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.20'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='21' port='0x24'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.21'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='22' port='0x25'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.22'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='23' port='0x26'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.23'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='24' port='0x27'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.24'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='25' port='0x28'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.25'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-pci-bridge'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.26'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='usb'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='sata' index='0'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='ide'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <interface type='ethernet'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <mac address='fa:16:3e:8b:9e:3d'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target dev='tapeb2cd434-44'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model type='virtio'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <driver name='vhost' rx_queue_size='512'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <mtu size='1442'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='net0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </interface>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <interface type='ethernet'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <mac address='fa:16:3e:33:d2:11'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target dev='tap9ea527cd-71'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model type='virtio'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <driver name='vhost' rx_queue_size='512'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <mtu size='1442'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='net1'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </interface>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <serial type='pty'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <source path='/dev/pts/0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <log file='/var/lib/nova/instances/2fe2b257-7e1f-46c2-aed9-0593c533e290/console.log' append='off'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target type='isa-serial' port='0'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:         <model name='isa-serial'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       </target>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='serial0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </serial>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <console type='pty' tty='/dev/pts/0'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <source path='/dev/pts/0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <log file='/var/lib/nova/instances/2fe2b257-7e1f-46c2-aed9-0593c533e290/console.log' append='off'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target type='serial' port='0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='serial0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </console>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <input type='tablet' bus='usb'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='input0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='usb' bus='0' port='1'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </input>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <input type='mouse' bus='ps2'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='input1'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </input>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <input type='keyboard' bus='ps2'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='input2'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </input>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <listen type='address' address='::0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </graphics>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <audio id='1' type='none'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <video>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model type='virtio' heads='1' primary='yes'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='video0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </video>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <watchdog model='itco' action='reset'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='watchdog0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </watchdog>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <memballoon model='virtio'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <stats period='10'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='balloon0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </memballoon>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <rng model='virtio'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <backend model='random'>/dev/urandom</backend>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='rng0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </rng>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   </devices>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <label>system_u:system_r:svirt_t:s0:c160,c921</label>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c160,c921</imagelabel>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   </seclabel>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <label>+107:+107</label>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <imagelabel>+107:+107</imagelabel>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   </seclabel>
Oct 10 10:13:10 compute-1 nova_compute[235132]: </domain>
Oct 10 10:13:10 compute-1 nova_compute[235132]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 10 10:13:10 compute-1 nova_compute[235132]: 2025-10-10 10:13:10.202 2 INFO nova.virt.libvirt.driver [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Successfully detached device tap9ea527cd-71 from instance 2fe2b257-7e1f-46c2-aed9-0593c533e290 from the persistent domain config.
Oct 10 10:13:10 compute-1 nova_compute[235132]: 2025-10-10 10:13:10.203 2 DEBUG nova.virt.libvirt.driver [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] (1/8): Attempting to detach device tap9ea527cd-71 with device alias net1 from instance 2fe2b257-7e1f-46c2-aed9-0593c533e290 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 10 10:13:10 compute-1 nova_compute[235132]: 2025-10-10 10:13:10.203 2 DEBUG nova.virt.libvirt.guest [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] detach device xml: <interface type="ethernet">
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <mac address="fa:16:3e:33:d2:11"/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <model type="virtio"/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <driver name="vhost" rx_queue_size="512"/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <mtu size="1442"/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <target dev="tap9ea527cd-71"/>
Oct 10 10:13:10 compute-1 nova_compute[235132]: </interface>
Oct 10 10:13:10 compute-1 nova_compute[235132]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 10 10:13:10 compute-1 ceph-mon[79167]: pgmap v816: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 15 KiB/s wr, 1 op/s
Oct 10 10:13:10 compute-1 kernel: tap9ea527cd-71 (unregistering): left promiscuous mode
Oct 10 10:13:10 compute-1 NetworkManager[44982]: <info>  [1760091190.3161] device (tap9ea527cd-71): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 10 10:13:10 compute-1 nova_compute[235132]: 2025-10-10 10:13:10.322 2 DEBUG nova.compute.manager [req-039c7b72-9e39-4670-8134-7616c1cea0f2 req-91cdee6a-57e3-4835-ba71-6b48577742f7 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Received event network-vif-plugged-9ea527cd-71d7-4979-bef2-4cbe7f0038cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:13:10 compute-1 nova_compute[235132]: 2025-10-10 10:13:10.322 2 DEBUG oslo_concurrency.lockutils [req-039c7b72-9e39-4670-8134-7616c1cea0f2 req-91cdee6a-57e3-4835-ba71-6b48577742f7 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:13:10 compute-1 nova_compute[235132]: 2025-10-10 10:13:10.322 2 DEBUG oslo_concurrency.lockutils [req-039c7b72-9e39-4670-8134-7616c1cea0f2 req-91cdee6a-57e3-4835-ba71-6b48577742f7 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:13:10 compute-1 nova_compute[235132]: 2025-10-10 10:13:10.322 2 DEBUG oslo_concurrency.lockutils [req-039c7b72-9e39-4670-8134-7616c1cea0f2 req-91cdee6a-57e3-4835-ba71-6b48577742f7 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:13:10 compute-1 nova_compute[235132]: 2025-10-10 10:13:10.323 2 DEBUG nova.compute.manager [req-039c7b72-9e39-4670-8134-7616c1cea0f2 req-91cdee6a-57e3-4835-ba71-6b48577742f7 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] No waiting events found dispatching network-vif-plugged-9ea527cd-71d7-4979-bef2-4cbe7f0038cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 10:13:10 compute-1 nova_compute[235132]: 2025-10-10 10:13:10.323 2 WARNING nova.compute.manager [req-039c7b72-9e39-4670-8134-7616c1cea0f2 req-91cdee6a-57e3-4835-ba71-6b48577742f7 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Received unexpected event network-vif-plugged-9ea527cd-71d7-4979-bef2-4cbe7f0038cf for instance with vm_state active and task_state None.
Oct 10 10:13:10 compute-1 ovn_controller[131749]: 2025-10-10T10:13:10Z|00050|binding|INFO|Releasing lport 9ea527cd-71d7-4979-bef2-4cbe7f0038cf from this chassis (sb_readonly=0)
Oct 10 10:13:10 compute-1 ovn_controller[131749]: 2025-10-10T10:13:10Z|00051|binding|INFO|Setting lport 9ea527cd-71d7-4979-bef2-4cbe7f0038cf down in Southbound
Oct 10 10:13:10 compute-1 ovn_controller[131749]: 2025-10-10T10:13:10Z|00052|binding|INFO|Removing iface tap9ea527cd-71 ovn-installed in OVS
Oct 10 10:13:10 compute-1 nova_compute[235132]: 2025-10-10 10:13:10.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:10 compute-1 nova_compute[235132]: 2025-10-10 10:13:10.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:10 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:10.333 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:d2:11 10.100.0.19'], port_security=['fa:16:3e:33:d2:11 10.100.0.19'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28', 'neutron:device_id': '2fe2b257-7e1f-46c2-aed9-0593c533e290', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2d451f14-1551-484b-9a8f-b854ec5a8acc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '79abf760-0fb0-448c-b5c8-75027ac31ae3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8d7aa34-fd4e-44cc-8eaa-a67a270b663f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>], logical_port=9ea527cd-71d7-4979-bef2-4cbe7f0038cf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:13:10 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:10.335 141156 INFO neutron.agent.ovn.metadata.agent [-] Port 9ea527cd-71d7-4979-bef2-4cbe7f0038cf in datapath 2d451f14-1551-484b-9a8f-b854ec5a8acc unbound from our chassis
Oct 10 10:13:10 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:10.336 141156 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2d451f14-1551-484b-9a8f-b854ec5a8acc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 10 10:13:10 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:10.337 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[775944dd-957f-4f40-b936-569a223cedf5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:13:10 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:10.337 141156 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2d451f14-1551-484b-9a8f-b854ec5a8acc namespace which is not needed anymore
Oct 10 10:13:10 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:10 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:13:10 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:10.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:13:10 compute-1 nova_compute[235132]: 2025-10-10 10:13:10.343 2 DEBUG nova.virt.libvirt.driver [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Received event <DeviceRemovedEvent: 1760091190.3428771, 2fe2b257-7e1f-46c2-aed9-0593c533e290 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 10 10:13:10 compute-1 nova_compute[235132]: 2025-10-10 10:13:10.345 2 DEBUG nova.virt.libvirt.driver [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Start waiting for the detach event from libvirt for device tap9ea527cd-71 with device alias net1 for instance 2fe2b257-7e1f-46c2-aed9-0593c533e290 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 10 10:13:10 compute-1 nova_compute[235132]: 2025-10-10 10:13:10.346 2 DEBUG nova.virt.libvirt.guest [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:33:d2:11"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9ea527cd-71"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 10 10:13:10 compute-1 nova_compute[235132]: 2025-10-10 10:13:10.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:10 compute-1 nova_compute[235132]: 2025-10-10 10:13:10.353 2 DEBUG nova.virt.libvirt.guest [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:33:d2:11"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9ea527cd-71"/></interface>not found in domain: <domain type='kvm' id='2'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <name>instance-00000003</name>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <uuid>2fe2b257-7e1f-46c2-aed9-0593c533e290</uuid>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <metadata>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <nova:name>tempest-TestNetworkBasicOps-server-1167416058</nova:name>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <nova:creationTime>2025-10-10 10:13:08</nova:creationTime>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <nova:flavor name="m1.nano">
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <nova:memory>128</nova:memory>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <nova:disk>1</nova:disk>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <nova:swap>0</nova:swap>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <nova:ephemeral>0</nova:ephemeral>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <nova:vcpus>1</nova:vcpus>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   </nova:flavor>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <nova:owner>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <nova:user uuid="7956778c03764aaf8906c9b435337976">tempest-TestNetworkBasicOps-188749107-project-member</nova:user>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <nova:project uuid="d5e531d4b440422d946eaf6fd4e166f7">tempest-TestNetworkBasicOps-188749107</nova:project>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   </nova:owner>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <nova:root type="image" uuid="5ae78700-970d-45b4-a57d-978a054c7519"/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <nova:ports>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <nova:port uuid="eb2cd434-444d-4138-bbe8-948bf47d3986">
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </nova:port>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <nova:port uuid="9ea527cd-71d7-4979-bef2-4cbe7f0038cf">
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <nova:ip type="fixed" address="10.100.0.19" ipVersion="4"/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </nova:port>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   </nova:ports>
Oct 10 10:13:10 compute-1 nova_compute[235132]: </nova:instance>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   </metadata>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <memory unit='KiB'>131072</memory>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <vcpu placement='static'>1</vcpu>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <resource>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <partition>/machine</partition>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   </resource>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <sysinfo type='smbios'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <system>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <entry name='manufacturer'>RDO</entry>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <entry name='product'>OpenStack Compute</entry>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <entry name='serial'>2fe2b257-7e1f-46c2-aed9-0593c533e290</entry>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <entry name='uuid'>2fe2b257-7e1f-46c2-aed9-0593c533e290</entry>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <entry name='family'>Virtual Machine</entry>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </system>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   </sysinfo>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <os>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <boot dev='hd'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <smbios mode='sysinfo'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   </os>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <features>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <acpi/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <apic/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <vmcoreinfo state='on'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   </features>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <cpu mode='custom' match='exact' check='full'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <vendor>AMD</vendor>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='require' name='x2apic'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='require' name='tsc-deadline'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='require' name='hypervisor'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='require' name='tsc_adjust'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='require' name='spec-ctrl'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='require' name='stibp'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='require' name='arch-capabilities'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='require' name='ssbd'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='require' name='cmp_legacy'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='require' name='overflow-recov'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='require' name='succor'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='require' name='ibrs'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='require' name='amd-ssbd'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='require' name='virt-ssbd'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='disable' name='lbrv'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='disable' name='tsc-scale'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='disable' name='vmcb-clean'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='disable' name='flushbyasid'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='disable' name='pause-filter'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='disable' name='pfthreshold'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='require' name='rdctl-no'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='require' name='mds-no'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='require' name='pschange-mc-no'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='require' name='gds-no'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='require' name='rfds-no'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='disable' name='xsaves'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='disable' name='svm'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='require' name='topoext'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='disable' name='npt'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <feature policy='disable' name='nrip-save'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   </cpu>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <clock offset='utc'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <timer name='pit' tickpolicy='delay'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <timer name='hpet' present='no'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   </clock>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <on_poweroff>destroy</on_poweroff>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <on_reboot>restart</on_reboot>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <on_crash>destroy</on_crash>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <devices>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <disk type='network' device='disk'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <driver name='qemu' type='raw' cache='none'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <auth username='openstack'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:         <secret type='ceph' uuid='21f084a3-af34-5230-afe4-ea5cd24a55f4'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       </auth>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <source protocol='rbd' name='vms/2fe2b257-7e1f-46c2-aed9-0593c533e290_disk' index='2'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:         <host name='192.168.122.100' port='6789'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:         <host name='192.168.122.102' port='6789'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:         <host name='192.168.122.101' port='6789'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       </source>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target dev='vda' bus='virtio'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='virtio-disk0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </disk>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <disk type='network' device='cdrom'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <driver name='qemu' type='raw' cache='none'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <auth username='openstack'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:         <secret type='ceph' uuid='21f084a3-af34-5230-afe4-ea5cd24a55f4'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       </auth>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <source protocol='rbd' name='vms/2fe2b257-7e1f-46c2-aed9-0593c533e290_disk.config' index='1'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:         <host name='192.168.122.100' port='6789'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:         <host name='192.168.122.102' port='6789'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:         <host name='192.168.122.101' port='6789'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       </source>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target dev='sda' bus='sata'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <readonly/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='sata0-0-0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </disk>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='0' model='pcie-root'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pcie.0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='1' port='0x10'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.1'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='2' port='0x11'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.2'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='3' port='0x12'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.3'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='4' port='0x13'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.4'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='5' port='0x14'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.5'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='6' port='0x15'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.6'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='7' port='0x16'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.7'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='8' port='0x17'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.8'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='9' port='0x18'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.9'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='10' port='0x19'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.10'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='11' port='0x1a'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.11'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='12' port='0x1b'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.12'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='13' port='0x1c'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.13'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='14' port='0x1d'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.14'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='15' port='0x1e'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.15'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='16' port='0x1f'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.16'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='17' port='0x20'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.17'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='18' port='0x21'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.18'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='19' port='0x22'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.19'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='20' port='0x23'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.20'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='21' port='0x24'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.21'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='22' port='0x25'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.22'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='23' port='0x26'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.23'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='24' port='0x27'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.24'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target chassis='25' port='0x28'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.25'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model name='pcie-pci-bridge'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='pci.26'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='usb'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <controller type='sata' index='0'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='ide'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <interface type='ethernet'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <mac address='fa:16:3e:8b:9e:3d'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target dev='tapeb2cd434-44'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model type='virtio'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <driver name='vhost' rx_queue_size='512'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <mtu size='1442'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='net0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </interface>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <serial type='pty'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <source path='/dev/pts/0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <log file='/var/lib/nova/instances/2fe2b257-7e1f-46c2-aed9-0593c533e290/console.log' append='off'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target type='isa-serial' port='0'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:         <model name='isa-serial'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       </target>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='serial0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </serial>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <console type='pty' tty='/dev/pts/0'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <source path='/dev/pts/0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <log file='/var/lib/nova/instances/2fe2b257-7e1f-46c2-aed9-0593c533e290/console.log' append='off'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <target type='serial' port='0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='serial0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </console>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <input type='tablet' bus='usb'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='input0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='usb' bus='0' port='1'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </input>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <input type='mouse' bus='ps2'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='input1'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </input>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <input type='keyboard' bus='ps2'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='input2'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </input>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <listen type='address' address='::0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </graphics>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <audio id='1' type='none'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <video>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <model type='virtio' heads='1' primary='yes'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='video0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </video>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <watchdog model='itco' action='reset'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='watchdog0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </watchdog>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <memballoon model='virtio'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <stats period='10'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='balloon0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </memballoon>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <rng model='virtio'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <backend model='random'>/dev/urandom</backend>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <alias name='rng0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </rng>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   </devices>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <label>system_u:system_r:svirt_t:s0:c160,c921</label>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c160,c921</imagelabel>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   </seclabel>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <label>+107:+107</label>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <imagelabel>+107:+107</imagelabel>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   </seclabel>
Oct 10 10:13:10 compute-1 nova_compute[235132]: </domain>
Oct 10 10:13:10 compute-1 nova_compute[235132]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 10 10:13:10 compute-1 nova_compute[235132]: 2025-10-10 10:13:10.353 2 INFO nova.virt.libvirt.driver [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Successfully detached device tap9ea527cd-71 from instance 2fe2b257-7e1f-46c2-aed9-0593c533e290 from the live domain config.
Oct 10 10:13:10 compute-1 nova_compute[235132]: 2025-10-10 10:13:10.354 2 DEBUG nova.virt.libvirt.vif [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-10T10:12:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1167416058',display_name='tempest-TestNetworkBasicOps-server-1167416058',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1167416058',id=3,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMTGlkGkqxsntvfgB83G1SiWbnvW7dUHWa7mCt3Bc6Si4+/bYujJEPULms51s1qxf1oywpdTq7/IqVAkQU+odhhf7e0wmrvEXEJnRsSzZiDWmk6FDUoWkd1hV01XsyivgQ==',key_name='tempest-TestNetworkBasicOps-919848835',keypairs=<?>,launch_index=0,launched_at=2025-10-10T10:12:36Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-jd07fe8o',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-10T10:12:36Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=2fe2b257-7e1f-46c2-aed9-0593c533e290,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "address": "fa:16:3e:33:d2:11", "network": {"id": "2d451f14-1551-484b-9a8f-b854ec5a8acc", "bridge": "br-int", "label": "tempest-network-smoke--1627200920", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ea527cd-71", "ovs_interfaceid": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 10 10:13:10 compute-1 nova_compute[235132]: 2025-10-10 10:13:10.354 2 DEBUG nova.network.os_vif_util [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "address": "fa:16:3e:33:d2:11", "network": {"id": "2d451f14-1551-484b-9a8f-b854ec5a8acc", "bridge": "br-int", "label": "tempest-network-smoke--1627200920", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ea527cd-71", "ovs_interfaceid": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 10 10:13:10 compute-1 nova_compute[235132]: 2025-10-10 10:13:10.355 2 DEBUG nova.network.os_vif_util [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:d2:11,bridge_name='br-int',has_traffic_filtering=True,id=9ea527cd-71d7-4979-bef2-4cbe7f0038cf,network=Network(2d451f14-1551-484b-9a8f-b854ec5a8acc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ea527cd-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 10 10:13:10 compute-1 nova_compute[235132]: 2025-10-10 10:13:10.355 2 DEBUG os_vif [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:d2:11,bridge_name='br-int',has_traffic_filtering=True,id=9ea527cd-71d7-4979-bef2-4cbe7f0038cf,network=Network(2d451f14-1551-484b-9a8f-b854ec5a8acc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ea527cd-71') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 10 10:13:10 compute-1 nova_compute[235132]: 2025-10-10 10:13:10.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:10 compute-1 nova_compute[235132]: 2025-10-10 10:13:10.357 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ea527cd-71, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:13:10 compute-1 nova_compute[235132]: 2025-10-10 10:13:10.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 10 10:13:10 compute-1 nova_compute[235132]: 2025-10-10 10:13:10.364 2 INFO os_vif [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:d2:11,bridge_name='br-int',has_traffic_filtering=True,id=9ea527cd-71d7-4979-bef2-4cbe7f0038cf,network=Network(2d451f14-1551-484b-9a8f-b854ec5a8acc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ea527cd-71')
Oct 10 10:13:10 compute-1 nova_compute[235132]: 2025-10-10 10:13:10.365 2 DEBUG nova.virt.libvirt.guest [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <nova:name>tempest-TestNetworkBasicOps-server-1167416058</nova:name>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <nova:creationTime>2025-10-10 10:13:10</nova:creationTime>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <nova:flavor name="m1.nano">
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <nova:memory>128</nova:memory>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <nova:disk>1</nova:disk>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <nova:swap>0</nova:swap>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <nova:ephemeral>0</nova:ephemeral>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <nova:vcpus>1</nova:vcpus>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   </nova:flavor>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <nova:owner>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <nova:user uuid="7956778c03764aaf8906c9b435337976">tempest-TestNetworkBasicOps-188749107-project-member</nova:user>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <nova:project uuid="d5e531d4b440422d946eaf6fd4e166f7">tempest-TestNetworkBasicOps-188749107</nova:project>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   </nova:owner>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <nova:root type="image" uuid="5ae78700-970d-45b4-a57d-978a054c7519"/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   <nova:ports>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     <nova:port uuid="eb2cd434-444d-4138-bbe8-948bf47d3986">
Oct 10 10:13:10 compute-1 nova_compute[235132]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 10 10:13:10 compute-1 nova_compute[235132]:     </nova:port>
Oct 10 10:13:10 compute-1 nova_compute[235132]:   </nova:ports>
Oct 10 10:13:10 compute-1 nova_compute[235132]: </nova:instance>
Oct 10 10:13:10 compute-1 nova_compute[235132]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 10 10:13:10 compute-1 neutron-haproxy-ovnmeta-2d451f14-1551-484b-9a8f-b854ec5a8acc[240714]: [NOTICE]   (240718) : haproxy version is 2.8.14-c23fe91
Oct 10 10:13:10 compute-1 neutron-haproxy-ovnmeta-2d451f14-1551-484b-9a8f-b854ec5a8acc[240714]: [NOTICE]   (240718) : path to executable is /usr/sbin/haproxy
Oct 10 10:13:10 compute-1 neutron-haproxy-ovnmeta-2d451f14-1551-484b-9a8f-b854ec5a8acc[240714]: [WARNING]  (240718) : Exiting Master process...
Oct 10 10:13:10 compute-1 neutron-haproxy-ovnmeta-2d451f14-1551-484b-9a8f-b854ec5a8acc[240714]: [ALERT]    (240718) : Current worker (240720) exited with code 143 (Terminated)
Oct 10 10:13:10 compute-1 neutron-haproxy-ovnmeta-2d451f14-1551-484b-9a8f-b854ec5a8acc[240714]: [WARNING]  (240718) : All workers exited. Exiting... (0)
Oct 10 10:13:10 compute-1 systemd[1]: libpod-a51ba6690d17c2e1a677a2493334040180ae60132b9142f7651626ab76e08adc.scope: Deactivated successfully.
Oct 10 10:13:10 compute-1 podman[240769]: 2025-10-10 10:13:10.495142863 +0000 UTC m=+0.053065041 container died a51ba6690d17c2e1a677a2493334040180ae60132b9142f7651626ab76e08adc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2d451f14-1551-484b-9a8f-b854ec5a8acc, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:13:10 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a51ba6690d17c2e1a677a2493334040180ae60132b9142f7651626ab76e08adc-userdata-shm.mount: Deactivated successfully.
Oct 10 10:13:10 compute-1 systemd[1]: var-lib-containers-storage-overlay-e424e3f4e3815f244fa79fcc7f0f5daf62663db4a716ad6c422fb36d7b3a0dc9-merged.mount: Deactivated successfully.
Oct 10 10:13:10 compute-1 podman[240769]: 2025-10-10 10:13:10.547545976 +0000 UTC m=+0.105468174 container cleanup a51ba6690d17c2e1a677a2493334040180ae60132b9142f7651626ab76e08adc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2d451f14-1551-484b-9a8f-b854ec5a8acc, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 10 10:13:10 compute-1 systemd[1]: libpod-conmon-a51ba6690d17c2e1a677a2493334040180ae60132b9142f7651626ab76e08adc.scope: Deactivated successfully.
Oct 10 10:13:10 compute-1 podman[240800]: 2025-10-10 10:13:10.613711056 +0000 UTC m=+0.043739148 container remove a51ba6690d17c2e1a677a2493334040180ae60132b9142f7651626ab76e08adc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2d451f14-1551-484b-9a8f-b854ec5a8acc, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:13:10 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:10.622 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[70d36e82-5601-4705-a542-4dbe5758d928]: (4, ('Fri Oct 10 10:13:10 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2d451f14-1551-484b-9a8f-b854ec5a8acc (a51ba6690d17c2e1a677a2493334040180ae60132b9142f7651626ab76e08adc)\na51ba6690d17c2e1a677a2493334040180ae60132b9142f7651626ab76e08adc\nFri Oct 10 10:13:10 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2d451f14-1551-484b-9a8f-b854ec5a8acc (a51ba6690d17c2e1a677a2493334040180ae60132b9142f7651626ab76e08adc)\na51ba6690d17c2e1a677a2493334040180ae60132b9142f7651626ab76e08adc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:13:10 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:10.626 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[1a677852-acb5-4723-a518-1decca262504]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:13:10 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:10.627 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d451f14-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:13:10 compute-1 nova_compute[235132]: 2025-10-10 10:13:10.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:10 compute-1 kernel: tap2d451f14-10: left promiscuous mode
Oct 10 10:13:10 compute-1 nova_compute[235132]: 2025-10-10 10:13:10.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:10 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:10.661 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[7d83854b-d2d8-45b1-bc6b-aaff098cdf10]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:13:10 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:10.694 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[00213736-2cf2-4eff-a9af-2fb29cdf476a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:13:10 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:10.696 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[4ed6c32b-c173-4c2c-882c-a0c63c85dcfc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:13:10 compute-1 sshd-session[240622]: Received disconnect from 193.46.255.159 port 23852:11:  [preauth]
Oct 10 10:13:10 compute-1 sshd-session[240622]: Disconnected from authenticating user root 193.46.255.159 port 23852 [preauth]
Oct 10 10:13:10 compute-1 sshd-session[240622]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.159  user=root
Oct 10 10:13:10 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:10.715 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[0ed78d5e-9caa-4eb4-b35d-0ffb6c005620]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 410166, 'reachable_time': 42124, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240815, 'error': None, 'target': 'ovnmeta-2d451f14-1551-484b-9a8f-b854ec5a8acc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:13:10 compute-1 systemd[1]: run-netns-ovnmeta\x2d2d451f14\x2d1551\x2d484b\x2d9a8f\x2db854ec5a8acc.mount: Deactivated successfully.
Oct 10 10:13:10 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:10.721 141275 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2d451f14-1551-484b-9a8f-b854ec5a8acc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 10 10:13:10 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:10.721 141275 DEBUG oslo.privsep.daemon [-] privsep: reply[51f09669-537b-4a22-a3fb-269f2aa8409d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:13:10 compute-1 nova_compute[235132]: 2025-10-10 10:13:10.813 2 DEBUG nova.network.neutron [req-81069918-9143-4e13-b82f-629d695d2a1c req-fbb21ddf-7861-4b13-92e6-a89d18e73b73 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Updated VIF entry in instance network info cache for port 9ea527cd-71d7-4979-bef2-4cbe7f0038cf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 10 10:13:10 compute-1 nova_compute[235132]: 2025-10-10 10:13:10.814 2 DEBUG nova.network.neutron [req-81069918-9143-4e13-b82f-629d695d2a1c req-fbb21ddf-7861-4b13-92e6-a89d18e73b73 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Updating instance_info_cache with network_info: [{"id": "eb2cd434-444d-4138-bbe8-948bf47d3986", "address": "fa:16:3e:8b:9e:3d", "network": {"id": "c1ba46b2-7e02-4d4f-b296-3e1e1f027d22", "bridge": "br-int", "label": "tempest-network-smoke--2006106686", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb2cd434-44", "ovs_interfaceid": "eb2cd434-444d-4138-bbe8-948bf47d3986", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "address": "fa:16:3e:33:d2:11", "network": {"id": "2d451f14-1551-484b-9a8f-b854ec5a8acc", "bridge": "br-int", "label": "tempest-network-smoke--1627200920", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ea527cd-71", "ovs_interfaceid": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:13:10 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:10 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:10 compute-1 nova_compute[235132]: 2025-10-10 10:13:10.843 2 DEBUG oslo_concurrency.lockutils [req-81069918-9143-4e13-b82f-629d695d2a1c req-fbb21ddf-7861-4b13-92e6-a89d18e73b73 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Releasing lock "refresh_cache-2fe2b257-7e1f-46c2-aed9-0593c533e290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:13:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/101311 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 10:13:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:11 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5340001b90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:11 compute-1 unix_chkpwd[240819]: password check failed for user (root)
Oct 10 10:13:11 compute-1 sshd-session[240816]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.159  user=root
Oct 10 10:13:11 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:11 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5340001b90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:11 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:11 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:13:11 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:11.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:13:12 compute-1 ceph-mon[79167]: pgmap v817: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 3.0 KiB/s wr, 0 op/s
Oct 10 10:13:12 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:12 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:13:12 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:12.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:13:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:13:12 compute-1 nova_compute[235132]: 2025-10-10 10:13:12.695 2 DEBUG nova.compute.manager [req-06baae13-60fa-4f58-b12c-292a36815dff req-610930ab-9926-4125-8c35-0119d1e21ab8 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Received event network-vif-unplugged-9ea527cd-71d7-4979-bef2-4cbe7f0038cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:13:12 compute-1 nova_compute[235132]: 2025-10-10 10:13:12.696 2 DEBUG oslo_concurrency.lockutils [req-06baae13-60fa-4f58-b12c-292a36815dff req-610930ab-9926-4125-8c35-0119d1e21ab8 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:13:12 compute-1 nova_compute[235132]: 2025-10-10 10:13:12.696 2 DEBUG oslo_concurrency.lockutils [req-06baae13-60fa-4f58-b12c-292a36815dff req-610930ab-9926-4125-8c35-0119d1e21ab8 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:13:12 compute-1 nova_compute[235132]: 2025-10-10 10:13:12.697 2 DEBUG oslo_concurrency.lockutils [req-06baae13-60fa-4f58-b12c-292a36815dff req-610930ab-9926-4125-8c35-0119d1e21ab8 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:13:12 compute-1 nova_compute[235132]: 2025-10-10 10:13:12.697 2 DEBUG nova.compute.manager [req-06baae13-60fa-4f58-b12c-292a36815dff req-610930ab-9926-4125-8c35-0119d1e21ab8 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] No waiting events found dispatching network-vif-unplugged-9ea527cd-71d7-4979-bef2-4cbe7f0038cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 10:13:12 compute-1 nova_compute[235132]: 2025-10-10 10:13:12.697 2 WARNING nova.compute.manager [req-06baae13-60fa-4f58-b12c-292a36815dff req-610930ab-9926-4125-8c35-0119d1e21ab8 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Received unexpected event network-vif-unplugged-9ea527cd-71d7-4979-bef2-4cbe7f0038cf for instance with vm_state active and task_state None.
Oct 10 10:13:12 compute-1 nova_compute[235132]: 2025-10-10 10:13:12.697 2 DEBUG nova.compute.manager [req-06baae13-60fa-4f58-b12c-292a36815dff req-610930ab-9926-4125-8c35-0119d1e21ab8 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Received event network-vif-plugged-9ea527cd-71d7-4979-bef2-4cbe7f0038cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:13:12 compute-1 nova_compute[235132]: 2025-10-10 10:13:12.698 2 DEBUG oslo_concurrency.lockutils [req-06baae13-60fa-4f58-b12c-292a36815dff req-610930ab-9926-4125-8c35-0119d1e21ab8 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:13:12 compute-1 nova_compute[235132]: 2025-10-10 10:13:12.698 2 DEBUG oslo_concurrency.lockutils [req-06baae13-60fa-4f58-b12c-292a36815dff req-610930ab-9926-4125-8c35-0119d1e21ab8 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:13:12 compute-1 nova_compute[235132]: 2025-10-10 10:13:12.698 2 DEBUG oslo_concurrency.lockutils [req-06baae13-60fa-4f58-b12c-292a36815dff req-610930ab-9926-4125-8c35-0119d1e21ab8 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:13:12 compute-1 nova_compute[235132]: 2025-10-10 10:13:12.699 2 DEBUG nova.compute.manager [req-06baae13-60fa-4f58-b12c-292a36815dff req-610930ab-9926-4125-8c35-0119d1e21ab8 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] No waiting events found dispatching network-vif-plugged-9ea527cd-71d7-4979-bef2-4cbe7f0038cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 10:13:12 compute-1 nova_compute[235132]: 2025-10-10 10:13:12.699 2 WARNING nova.compute.manager [req-06baae13-60fa-4f58-b12c-292a36815dff req-610930ab-9926-4125-8c35-0119d1e21ab8 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Received unexpected event network-vif-plugged-9ea527cd-71d7-4979-bef2-4cbe7f0038cf for instance with vm_state active and task_state None.
Oct 10 10:13:12 compute-1 nova_compute[235132]: 2025-10-10 10:13:12.761 2 DEBUG oslo_concurrency.lockutils [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "refresh_cache-2fe2b257-7e1f-46c2-aed9-0593c533e290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:13:12 compute-1 nova_compute[235132]: 2025-10-10 10:13:12.762 2 DEBUG oslo_concurrency.lockutils [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquired lock "refresh_cache-2fe2b257-7e1f-46c2-aed9-0593c533e290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:13:12 compute-1 nova_compute[235132]: 2025-10-10 10:13:12.762 2 DEBUG nova.network.neutron [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 10 10:13:12 compute-1 nova_compute[235132]: 2025-10-10 10:13:12.830 2 DEBUG nova.compute.manager [req-c905014b-4245-4071-a013-69236fb399c9 req-0c736a7d-ac57-48f3-bcee-75bcbef4b0ce 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Received event network-vif-deleted-9ea527cd-71d7-4979-bef2-4cbe7f0038cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:13:12 compute-1 nova_compute[235132]: 2025-10-10 10:13:12.830 2 INFO nova.compute.manager [req-c905014b-4245-4071-a013-69236fb399c9 req-0c736a7d-ac57-48f3-bcee-75bcbef4b0ce 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Neutron deleted interface 9ea527cd-71d7-4979-bef2-4cbe7f0038cf; detaching it from the instance and deleting it from the info cache
Oct 10 10:13:12 compute-1 nova_compute[235132]: 2025-10-10 10:13:12.831 2 DEBUG nova.network.neutron [req-c905014b-4245-4071-a013-69236fb399c9 req-0c736a7d-ac57-48f3-bcee-75bcbef4b0ce 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Updating instance_info_cache with network_info: [{"id": "eb2cd434-444d-4138-bbe8-948bf47d3986", "address": "fa:16:3e:8b:9e:3d", "network": {"id": "c1ba46b2-7e02-4d4f-b296-3e1e1f027d22", "bridge": "br-int", "label": "tempest-network-smoke--2006106686", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb2cd434-44", "ovs_interfaceid": "eb2cd434-444d-4138-bbe8-948bf47d3986", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:13:12 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:12 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:12 compute-1 nova_compute[235132]: 2025-10-10 10:13:12.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:12 compute-1 nova_compute[235132]: 2025-10-10 10:13:12.857 2 DEBUG nova.objects.instance [req-c905014b-4245-4071-a013-69236fb399c9 req-0c736a7d-ac57-48f3-bcee-75bcbef4b0ce 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lazy-loading 'system_metadata' on Instance uuid 2fe2b257-7e1f-46c2-aed9-0593c533e290 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 10:13:12 compute-1 nova_compute[235132]: 2025-10-10 10:13:12.910 2 DEBUG nova.objects.instance [req-c905014b-4245-4071-a013-69236fb399c9 req-0c736a7d-ac57-48f3-bcee-75bcbef4b0ce 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lazy-loading 'flavor' on Instance uuid 2fe2b257-7e1f-46c2-aed9-0593c533e290 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 10:13:12 compute-1 nova_compute[235132]: 2025-10-10 10:13:12.948 2 DEBUG nova.virt.libvirt.vif [req-c905014b-4245-4071-a013-69236fb399c9 req-0c736a7d-ac57-48f3-bcee-75bcbef4b0ce 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-10T10:12:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1167416058',display_name='tempest-TestNetworkBasicOps-server-1167416058',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1167416058',id=3,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMTGlkGkqxsntvfgB83G1SiWbnvW7dUHWa7mCt3Bc6Si4+/bYujJEPULms51s1qxf1oywpdTq7/IqVAkQU+odhhf7e0wmrvEXEJnRsSzZiDWmk6FDUoWkd1hV01XsyivgQ==',key_name='tempest-TestNetworkBasicOps-919848835',keypairs=<?>,launch_index=0,launched_at=2025-10-10T10:12:36Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-jd07fe8o',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-10T10:12:36Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=2fe2b257-7e1f-46c2-aed9-0593c533e290,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "address": "fa:16:3e:33:d2:11", "network": {"id": "2d451f14-1551-484b-9a8f-b854ec5a8acc", "bridge": "br-int", "label": "tempest-network-smoke--1627200920", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ea527cd-71", "ovs_interfaceid": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 10 10:13:12 compute-1 nova_compute[235132]: 2025-10-10 10:13:12.949 2 DEBUG nova.network.os_vif_util [req-c905014b-4245-4071-a013-69236fb399c9 req-0c736a7d-ac57-48f3-bcee-75bcbef4b0ce 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Converting VIF {"id": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "address": "fa:16:3e:33:d2:11", "network": {"id": "2d451f14-1551-484b-9a8f-b854ec5a8acc", "bridge": "br-int", "label": "tempest-network-smoke--1627200920", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ea527cd-71", "ovs_interfaceid": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 10 10:13:12 compute-1 nova_compute[235132]: 2025-10-10 10:13:12.950 2 DEBUG nova.network.os_vif_util [req-c905014b-4245-4071-a013-69236fb399c9 req-0c736a7d-ac57-48f3-bcee-75bcbef4b0ce 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:d2:11,bridge_name='br-int',has_traffic_filtering=True,id=9ea527cd-71d7-4979-bef2-4cbe7f0038cf,network=Network(2d451f14-1551-484b-9a8f-b854ec5a8acc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ea527cd-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 10 10:13:12 compute-1 nova_compute[235132]: 2025-10-10 10:13:12.954 2 DEBUG nova.virt.libvirt.guest [req-c905014b-4245-4071-a013-69236fb399c9 req-0c736a7d-ac57-48f3-bcee-75bcbef4b0ce 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:33:d2:11"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9ea527cd-71"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 10 10:13:12 compute-1 nova_compute[235132]: 2025-10-10 10:13:12.957 2 DEBUG nova.virt.libvirt.guest [req-c905014b-4245-4071-a013-69236fb399c9 req-0c736a7d-ac57-48f3-bcee-75bcbef4b0ce 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:33:d2:11"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9ea527cd-71"/></interface>not found in domain: <domain type='kvm' id='2'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <name>instance-00000003</name>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <uuid>2fe2b257-7e1f-46c2-aed9-0593c533e290</uuid>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <metadata>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <nova:name>tempest-TestNetworkBasicOps-server-1167416058</nova:name>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <nova:creationTime>2025-10-10 10:13:10</nova:creationTime>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <nova:flavor name="m1.nano">
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <nova:memory>128</nova:memory>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <nova:disk>1</nova:disk>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <nova:swap>0</nova:swap>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <nova:ephemeral>0</nova:ephemeral>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <nova:vcpus>1</nova:vcpus>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   </nova:flavor>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <nova:owner>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <nova:user uuid="7956778c03764aaf8906c9b435337976">tempest-TestNetworkBasicOps-188749107-project-member</nova:user>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <nova:project uuid="d5e531d4b440422d946eaf6fd4e166f7">tempest-TestNetworkBasicOps-188749107</nova:project>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   </nova:owner>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <nova:root type="image" uuid="5ae78700-970d-45b4-a57d-978a054c7519"/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <nova:ports>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <nova:port uuid="eb2cd434-444d-4138-bbe8-948bf47d3986">
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </nova:port>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   </nova:ports>
Oct 10 10:13:12 compute-1 nova_compute[235132]: </nova:instance>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   </metadata>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <memory unit='KiB'>131072</memory>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <vcpu placement='static'>1</vcpu>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <resource>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <partition>/machine</partition>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   </resource>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <sysinfo type='smbios'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <system>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <entry name='manufacturer'>RDO</entry>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <entry name='product'>OpenStack Compute</entry>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <entry name='serial'>2fe2b257-7e1f-46c2-aed9-0593c533e290</entry>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <entry name='uuid'>2fe2b257-7e1f-46c2-aed9-0593c533e290</entry>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <entry name='family'>Virtual Machine</entry>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </system>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   </sysinfo>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <os>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <boot dev='hd'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <smbios mode='sysinfo'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   </os>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <features>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <acpi/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <apic/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <vmcoreinfo state='on'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   </features>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <cpu mode='custom' match='exact' check='full'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <vendor>AMD</vendor>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='require' name='x2apic'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='require' name='tsc-deadline'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='require' name='hypervisor'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='require' name='tsc_adjust'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='require' name='spec-ctrl'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='require' name='stibp'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='require' name='arch-capabilities'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='require' name='ssbd'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='require' name='cmp_legacy'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='require' name='overflow-recov'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='require' name='succor'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='require' name='ibrs'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='require' name='amd-ssbd'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='require' name='virt-ssbd'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='disable' name='lbrv'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='disable' name='tsc-scale'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='disable' name='vmcb-clean'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='disable' name='flushbyasid'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='disable' name='pause-filter'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='disable' name='pfthreshold'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='require' name='rdctl-no'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='require' name='mds-no'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='require' name='pschange-mc-no'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='require' name='gds-no'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='require' name='rfds-no'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='disable' name='xsaves'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='disable' name='svm'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='require' name='topoext'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='disable' name='npt'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='disable' name='nrip-save'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   </cpu>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <clock offset='utc'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <timer name='pit' tickpolicy='delay'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <timer name='hpet' present='no'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   </clock>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <on_poweroff>destroy</on_poweroff>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <on_reboot>restart</on_reboot>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <on_crash>destroy</on_crash>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <devices>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <disk type='network' device='disk'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <driver name='qemu' type='raw' cache='none'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <auth username='openstack'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:         <secret type='ceph' uuid='21f084a3-af34-5230-afe4-ea5cd24a55f4'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       </auth>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <source protocol='rbd' name='vms/2fe2b257-7e1f-46c2-aed9-0593c533e290_disk' index='2'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:         <host name='192.168.122.100' port='6789'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:         <host name='192.168.122.102' port='6789'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:         <host name='192.168.122.101' port='6789'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       </source>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target dev='vda' bus='virtio'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='virtio-disk0'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </disk>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <disk type='network' device='cdrom'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <driver name='qemu' type='raw' cache='none'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <auth username='openstack'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:         <secret type='ceph' uuid='21f084a3-af34-5230-afe4-ea5cd24a55f4'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       </auth>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <source protocol='rbd' name='vms/2fe2b257-7e1f-46c2-aed9-0593c533e290_disk.config' index='1'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:         <host name='192.168.122.100' port='6789'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:         <host name='192.168.122.102' port='6789'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:         <host name='192.168.122.101' port='6789'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       </source>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target dev='sda' bus='sata'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <readonly/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='sata0-0-0'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </disk>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='0' model='pcie-root'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pcie.0'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='1' port='0x10'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.1'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='2' port='0x11'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.2'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='3' port='0x12'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.3'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='4' port='0x13'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.4'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='5' port='0x14'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.5'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='6' port='0x15'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.6'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='7' port='0x16'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.7'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='8' port='0x17'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.8'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='9' port='0x18'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.9'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='10' port='0x19'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.10'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='11' port='0x1a'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.11'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='12' port='0x1b'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.12'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='13' port='0x1c'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.13'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='14' port='0x1d'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.14'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='15' port='0x1e'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.15'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='16' port='0x1f'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.16'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='17' port='0x20'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.17'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='18' port='0x21'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.18'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='19' port='0x22'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.19'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='20' port='0x23'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.20'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='21' port='0x24'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.21'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='22' port='0x25'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.22'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='23' port='0x26'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.23'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='24' port='0x27'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.24'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='25' port='0x28'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.25'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-pci-bridge'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.26'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='usb'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='sata' index='0'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='ide'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <interface type='ethernet'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <mac address='fa:16:3e:8b:9e:3d'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target dev='tapeb2cd434-44'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model type='virtio'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <driver name='vhost' rx_queue_size='512'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <mtu size='1442'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='net0'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </interface>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <serial type='pty'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <source path='/dev/pts/0'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <log file='/var/lib/nova/instances/2fe2b257-7e1f-46c2-aed9-0593c533e290/console.log' append='off'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target type='isa-serial' port='0'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:         <model name='isa-serial'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       </target>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='serial0'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </serial>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <console type='pty' tty='/dev/pts/0'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <source path='/dev/pts/0'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <log file='/var/lib/nova/instances/2fe2b257-7e1f-46c2-aed9-0593c533e290/console.log' append='off'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target type='serial' port='0'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='serial0'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </console>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <input type='tablet' bus='usb'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='input0'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='usb' bus='0' port='1'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </input>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <input type='mouse' bus='ps2'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='input1'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </input>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <input type='keyboard' bus='ps2'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='input2'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </input>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <listen type='address' address='::0'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </graphics>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <audio id='1' type='none'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <video>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model type='virtio' heads='1' primary='yes'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='video0'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </video>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <watchdog model='itco' action='reset'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='watchdog0'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </watchdog>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <memballoon model='virtio'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <stats period='10'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='balloon0'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </memballoon>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <rng model='virtio'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <backend model='random'>/dev/urandom</backend>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='rng0'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </rng>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   </devices>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <label>system_u:system_r:svirt_t:s0:c160,c921</label>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c160,c921</imagelabel>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   </seclabel>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <label>+107:+107</label>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <imagelabel>+107:+107</imagelabel>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   </seclabel>
Oct 10 10:13:12 compute-1 nova_compute[235132]: </domain>
Oct 10 10:13:12 compute-1 nova_compute[235132]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 10 10:13:12 compute-1 nova_compute[235132]: 2025-10-10 10:13:12.958 2 DEBUG nova.virt.libvirt.guest [req-c905014b-4245-4071-a013-69236fb399c9 req-0c736a7d-ac57-48f3-bcee-75bcbef4b0ce 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:33:d2:11"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9ea527cd-71"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 10 10:13:12 compute-1 nova_compute[235132]: 2025-10-10 10:13:12.965 2 DEBUG nova.virt.libvirt.guest [req-c905014b-4245-4071-a013-69236fb399c9 req-0c736a7d-ac57-48f3-bcee-75bcbef4b0ce 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:33:d2:11"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9ea527cd-71"/></interface>not found in domain: <domain type='kvm' id='2'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <name>instance-00000003</name>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <uuid>2fe2b257-7e1f-46c2-aed9-0593c533e290</uuid>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <metadata>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <nova:name>tempest-TestNetworkBasicOps-server-1167416058</nova:name>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <nova:creationTime>2025-10-10 10:13:10</nova:creationTime>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <nova:flavor name="m1.nano">
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <nova:memory>128</nova:memory>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <nova:disk>1</nova:disk>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <nova:swap>0</nova:swap>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <nova:ephemeral>0</nova:ephemeral>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <nova:vcpus>1</nova:vcpus>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   </nova:flavor>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <nova:owner>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <nova:user uuid="7956778c03764aaf8906c9b435337976">tempest-TestNetworkBasicOps-188749107-project-member</nova:user>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <nova:project uuid="d5e531d4b440422d946eaf6fd4e166f7">tempest-TestNetworkBasicOps-188749107</nova:project>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   </nova:owner>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <nova:root type="image" uuid="5ae78700-970d-45b4-a57d-978a054c7519"/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <nova:ports>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <nova:port uuid="eb2cd434-444d-4138-bbe8-948bf47d3986">
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </nova:port>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   </nova:ports>
Oct 10 10:13:12 compute-1 nova_compute[235132]: </nova:instance>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   </metadata>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <memory unit='KiB'>131072</memory>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <vcpu placement='static'>1</vcpu>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <resource>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <partition>/machine</partition>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   </resource>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <sysinfo type='smbios'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <system>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <entry name='manufacturer'>RDO</entry>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <entry name='product'>OpenStack Compute</entry>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <entry name='serial'>2fe2b257-7e1f-46c2-aed9-0593c533e290</entry>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <entry name='uuid'>2fe2b257-7e1f-46c2-aed9-0593c533e290</entry>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <entry name='family'>Virtual Machine</entry>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </system>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   </sysinfo>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <os>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <boot dev='hd'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <smbios mode='sysinfo'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   </os>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <features>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <acpi/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <apic/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <vmcoreinfo state='on'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   </features>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <cpu mode='custom' match='exact' check='full'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <vendor>AMD</vendor>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='require' name='x2apic'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='require' name='tsc-deadline'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='require' name='hypervisor'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='require' name='tsc_adjust'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='require' name='spec-ctrl'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='require' name='stibp'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='require' name='arch-capabilities'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='require' name='ssbd'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='require' name='cmp_legacy'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='require' name='overflow-recov'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='require' name='succor'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='require' name='ibrs'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='require' name='amd-ssbd'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='require' name='virt-ssbd'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='disable' name='lbrv'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='disable' name='tsc-scale'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='disable' name='vmcb-clean'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='disable' name='flushbyasid'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='disable' name='pause-filter'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='disable' name='pfthreshold'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='require' name='rdctl-no'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='require' name='mds-no'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='require' name='pschange-mc-no'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='require' name='gds-no'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='require' name='rfds-no'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='disable' name='xsaves'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='disable' name='svm'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='require' name='topoext'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='disable' name='npt'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <feature policy='disable' name='nrip-save'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   </cpu>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <clock offset='utc'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <timer name='pit' tickpolicy='delay'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <timer name='hpet' present='no'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   </clock>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <on_poweroff>destroy</on_poweroff>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <on_reboot>restart</on_reboot>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <on_crash>destroy</on_crash>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <devices>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <disk type='network' device='disk'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <driver name='qemu' type='raw' cache='none'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <auth username='openstack'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:         <secret type='ceph' uuid='21f084a3-af34-5230-afe4-ea5cd24a55f4'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       </auth>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <source protocol='rbd' name='vms/2fe2b257-7e1f-46c2-aed9-0593c533e290_disk' index='2'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:         <host name='192.168.122.100' port='6789'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:         <host name='192.168.122.102' port='6789'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:         <host name='192.168.122.101' port='6789'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       </source>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target dev='vda' bus='virtio'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='virtio-disk0'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </disk>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <disk type='network' device='cdrom'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <driver name='qemu' type='raw' cache='none'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <auth username='openstack'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:         <secret type='ceph' uuid='21f084a3-af34-5230-afe4-ea5cd24a55f4'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       </auth>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <source protocol='rbd' name='vms/2fe2b257-7e1f-46c2-aed9-0593c533e290_disk.config' index='1'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:         <host name='192.168.122.100' port='6789'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:         <host name='192.168.122.102' port='6789'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:         <host name='192.168.122.101' port='6789'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       </source>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target dev='sda' bus='sata'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <readonly/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='sata0-0-0'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </disk>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='0' model='pcie-root'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pcie.0'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='1' port='0x10'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.1'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='2' port='0x11'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.2'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='3' port='0x12'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.3'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='4' port='0x13'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.4'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='5' port='0x14'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.5'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='6' port='0x15'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.6'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='7' port='0x16'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.7'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='8' port='0x17'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.8'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='9' port='0x18'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.9'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='10' port='0x19'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.10'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='11' port='0x1a'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.11'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='12' port='0x1b'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.12'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='13' port='0x1c'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.13'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='14' port='0x1d'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.14'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='15' port='0x1e'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.15'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='16' port='0x1f'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.16'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='17' port='0x20'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.17'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='18' port='0x21'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.18'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='19' port='0x22'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.19'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='20' port='0x23'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.20'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='21' port='0x24'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.21'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='22' port='0x25'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.22'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='23' port='0x26'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.23'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='24' port='0x27'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.24'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target chassis='25' port='0x28'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.25'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model name='pcie-pci-bridge'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='pci.26'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='usb'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <controller type='sata' index='0'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='ide'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <interface type='ethernet'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <mac address='fa:16:3e:8b:9e:3d'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target dev='tapeb2cd434-44'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model type='virtio'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <driver name='vhost' rx_queue_size='512'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <mtu size='1442'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='net0'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </interface>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <serial type='pty'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <source path='/dev/pts/0'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <log file='/var/lib/nova/instances/2fe2b257-7e1f-46c2-aed9-0593c533e290/console.log' append='off'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target type='isa-serial' port='0'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:         <model name='isa-serial'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       </target>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='serial0'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </serial>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <console type='pty' tty='/dev/pts/0'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <source path='/dev/pts/0'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <log file='/var/lib/nova/instances/2fe2b257-7e1f-46c2-aed9-0593c533e290/console.log' append='off'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <target type='serial' port='0'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='serial0'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </console>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <input type='tablet' bus='usb'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='input0'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='usb' bus='0' port='1'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </input>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <input type='mouse' bus='ps2'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='input1'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </input>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <input type='keyboard' bus='ps2'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='input2'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </input>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <listen type='address' address='::0'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </graphics>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <audio id='1' type='none'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <video>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <model type='virtio' heads='1' primary='yes'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='video0'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </video>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <watchdog model='itco' action='reset'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='watchdog0'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </watchdog>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <memballoon model='virtio'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <stats period='10'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='balloon0'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </memballoon>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <rng model='virtio'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <backend model='random'>/dev/urandom</backend>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <alias name='rng0'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </rng>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   </devices>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <label>system_u:system_r:svirt_t:s0:c160,c921</label>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c160,c921</imagelabel>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   </seclabel>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <label>+107:+107</label>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <imagelabel>+107:+107</imagelabel>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   </seclabel>
Oct 10 10:13:12 compute-1 nova_compute[235132]: </domain>
Oct 10 10:13:12 compute-1 nova_compute[235132]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 10 10:13:12 compute-1 nova_compute[235132]: 2025-10-10 10:13:12.966 2 WARNING nova.virt.libvirt.driver [req-c905014b-4245-4071-a013-69236fb399c9 req-0c736a7d-ac57-48f3-bcee-75bcbef4b0ce 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Detaching interface fa:16:3e:33:d2:11 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap9ea527cd-71' not found.
Oct 10 10:13:12 compute-1 nova_compute[235132]: 2025-10-10 10:13:12.966 2 DEBUG nova.virt.libvirt.vif [req-c905014b-4245-4071-a013-69236fb399c9 req-0c736a7d-ac57-48f3-bcee-75bcbef4b0ce 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-10T10:12:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1167416058',display_name='tempest-TestNetworkBasicOps-server-1167416058',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1167416058',id=3,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMTGlkGkqxsntvfgB83G1SiWbnvW7dUHWa7mCt3Bc6Si4+/bYujJEPULms51s1qxf1oywpdTq7/IqVAkQU+odhhf7e0wmrvEXEJnRsSzZiDWmk6FDUoWkd1hV01XsyivgQ==',key_name='tempest-TestNetworkBasicOps-919848835',keypairs=<?>,launch_index=0,launched_at=2025-10-10T10:12:36Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-jd07fe8o',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-10T10:12:36Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=2fe2b257-7e1f-46c2-aed9-0593c533e290,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "address": "fa:16:3e:33:d2:11", "network": {"id": "2d451f14-1551-484b-9a8f-b854ec5a8acc", "bridge": "br-int", "label": "tempest-network-smoke--1627200920", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ea527cd-71", "ovs_interfaceid": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 10 10:13:12 compute-1 nova_compute[235132]: 2025-10-10 10:13:12.967 2 DEBUG nova.network.os_vif_util [req-c905014b-4245-4071-a013-69236fb399c9 req-0c736a7d-ac57-48f3-bcee-75bcbef4b0ce 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Converting VIF {"id": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "address": "fa:16:3e:33:d2:11", "network": {"id": "2d451f14-1551-484b-9a8f-b854ec5a8acc", "bridge": "br-int", "label": "tempest-network-smoke--1627200920", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ea527cd-71", "ovs_interfaceid": "9ea527cd-71d7-4979-bef2-4cbe7f0038cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 10 10:13:12 compute-1 nova_compute[235132]: 2025-10-10 10:13:12.968 2 DEBUG nova.network.os_vif_util [req-c905014b-4245-4071-a013-69236fb399c9 req-0c736a7d-ac57-48f3-bcee-75bcbef4b0ce 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:d2:11,bridge_name='br-int',has_traffic_filtering=True,id=9ea527cd-71d7-4979-bef2-4cbe7f0038cf,network=Network(2d451f14-1551-484b-9a8f-b854ec5a8acc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ea527cd-71') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 10 10:13:12 compute-1 nova_compute[235132]: 2025-10-10 10:13:12.968 2 DEBUG os_vif [req-c905014b-4245-4071-a013-69236fb399c9 req-0c736a7d-ac57-48f3-bcee-75bcbef4b0ce 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:d2:11,bridge_name='br-int',has_traffic_filtering=True,id=9ea527cd-71d7-4979-bef2-4cbe7f0038cf,network=Network(2d451f14-1551-484b-9a8f-b854ec5a8acc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ea527cd-71') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 10 10:13:12 compute-1 nova_compute[235132]: 2025-10-10 10:13:12.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:12 compute-1 nova_compute[235132]: 2025-10-10 10:13:12.970 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ea527cd-71, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:13:12 compute-1 nova_compute[235132]: 2025-10-10 10:13:12.970 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 10 10:13:12 compute-1 nova_compute[235132]: 2025-10-10 10:13:12.973 2 INFO os_vif [req-c905014b-4245-4071-a013-69236fb399c9 req-0c736a7d-ac57-48f3-bcee-75bcbef4b0ce 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:d2:11,bridge_name='br-int',has_traffic_filtering=True,id=9ea527cd-71d7-4979-bef2-4cbe7f0038cf,network=Network(2d451f14-1551-484b-9a8f-b854ec5a8acc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ea527cd-71')
Oct 10 10:13:12 compute-1 nova_compute[235132]: 2025-10-10 10:13:12.974 2 DEBUG nova.virt.libvirt.guest [req-c905014b-4245-4071-a013-69236fb399c9 req-0c736a7d-ac57-48f3-bcee-75bcbef4b0ce 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <nova:name>tempest-TestNetworkBasicOps-server-1167416058</nova:name>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <nova:creationTime>2025-10-10 10:13:12</nova:creationTime>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <nova:flavor name="m1.nano">
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <nova:memory>128</nova:memory>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <nova:disk>1</nova:disk>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <nova:swap>0</nova:swap>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <nova:ephemeral>0</nova:ephemeral>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <nova:vcpus>1</nova:vcpus>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   </nova:flavor>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <nova:owner>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <nova:user uuid="7956778c03764aaf8906c9b435337976">tempest-TestNetworkBasicOps-188749107-project-member</nova:user>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <nova:project uuid="d5e531d4b440422d946eaf6fd4e166f7">tempest-TestNetworkBasicOps-188749107</nova:project>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   </nova:owner>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <nova:root type="image" uuid="5ae78700-970d-45b4-a57d-978a054c7519"/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   <nova:ports>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     <nova:port uuid="eb2cd434-444d-4138-bbe8-948bf47d3986">
Oct 10 10:13:12 compute-1 nova_compute[235132]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 10 10:13:12 compute-1 nova_compute[235132]:     </nova:port>
Oct 10 10:13:12 compute-1 nova_compute[235132]:   </nova:ports>
Oct 10 10:13:12 compute-1 nova_compute[235132]: </nova:instance>
Oct 10 10:13:12 compute-1 nova_compute[235132]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 10 10:13:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:13 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:13 compute-1 ceph-mon[79167]: pgmap v818: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 3.0 KiB/s wr, 1 op/s
Oct 10 10:13:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:13 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:13 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:13 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:13:13 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:13.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:13:14 compute-1 sshd-session[240816]: Failed password for root from 193.46.255.159 port 57670 ssh2
Oct 10 10:13:14 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:14 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:14 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:14.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:14 compute-1 ovn_controller[131749]: 2025-10-10T10:13:14Z|00053|binding|INFO|Releasing lport ca6a8c9e-7d4d-4ccb-aa3e-a02bb6dd0c01 from this chassis (sb_readonly=0)
Oct 10 10:13:14 compute-1 nova_compute[235132]: 2025-10-10 10:13:14.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:14 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:14 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320001bd0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:14 compute-1 nova_compute[235132]: 2025-10-10 10:13:14.905 2 INFO nova.network.neutron [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Port 9ea527cd-71d7-4979-bef2-4cbe7f0038cf from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Oct 10 10:13:14 compute-1 nova_compute[235132]: 2025-10-10 10:13:14.906 2 DEBUG nova.network.neutron [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Updating instance_info_cache with network_info: [{"id": "eb2cd434-444d-4138-bbe8-948bf47d3986", "address": "fa:16:3e:8b:9e:3d", "network": {"id": "c1ba46b2-7e02-4d4f-b296-3e1e1f027d22", "bridge": "br-int", "label": "tempest-network-smoke--2006106686", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb2cd434-44", "ovs_interfaceid": "eb2cd434-444d-4138-bbe8-948bf47d3986", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:13:14 compute-1 nova_compute[235132]: 2025-10-10 10:13:14.936 2 DEBUG oslo_concurrency.lockutils [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Releasing lock "refresh_cache-2fe2b257-7e1f-46c2-aed9-0593c533e290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:13:14 compute-1 nova_compute[235132]: 2025-10-10 10:13:14.963 2 DEBUG oslo_concurrency.lockutils [None req-8d95ab4f-b532-4878-a1ee-dde6cfc6bda3 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "interface-2fe2b257-7e1f-46c2-aed9-0593c533e290-9ea527cd-71d7-4979-bef2-4cbe7f0038cf" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 4.845s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:13:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:15 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:15 compute-1 nova_compute[235132]: 2025-10-10 10:13:15.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:15 compute-1 nova_compute[235132]: 2025-10-10 10:13:15.373 2 DEBUG nova.compute.manager [req-0d269ad5-7ca7-493d-804c-dbabaee340a8 req-ae60f373-d4b1-40b3-b575-8ea0bb85981f 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Received event network-changed-eb2cd434-444d-4138-bbe8-948bf47d3986 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:13:15 compute-1 nova_compute[235132]: 2025-10-10 10:13:15.373 2 DEBUG nova.compute.manager [req-0d269ad5-7ca7-493d-804c-dbabaee340a8 req-ae60f373-d4b1-40b3-b575-8ea0bb85981f 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Refreshing instance network info cache due to event network-changed-eb2cd434-444d-4138-bbe8-948bf47d3986. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 10 10:13:15 compute-1 nova_compute[235132]: 2025-10-10 10:13:15.374 2 DEBUG oslo_concurrency.lockutils [req-0d269ad5-7ca7-493d-804c-dbabaee340a8 req-ae60f373-d4b1-40b3-b575-8ea0bb85981f 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "refresh_cache-2fe2b257-7e1f-46c2-aed9-0593c533e290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:13:15 compute-1 nova_compute[235132]: 2025-10-10 10:13:15.374 2 DEBUG oslo_concurrency.lockutils [req-0d269ad5-7ca7-493d-804c-dbabaee340a8 req-ae60f373-d4b1-40b3-b575-8ea0bb85981f 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquired lock "refresh_cache-2fe2b257-7e1f-46c2-aed9-0593c533e290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:13:15 compute-1 nova_compute[235132]: 2025-10-10 10:13:15.375 2 DEBUG nova.network.neutron [req-0d269ad5-7ca7-493d-804c-dbabaee340a8 req-ae60f373-d4b1-40b3-b575-8ea0bb85981f 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Refreshing network info cache for port eb2cd434-444d-4138-bbe8-948bf47d3986 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 10 10:13:15 compute-1 nova_compute[235132]: 2025-10-10 10:13:15.454 2 DEBUG oslo_concurrency.lockutils [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "2fe2b257-7e1f-46c2-aed9-0593c533e290" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:13:15 compute-1 nova_compute[235132]: 2025-10-10 10:13:15.455 2 DEBUG oslo_concurrency.lockutils [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "2fe2b257-7e1f-46c2-aed9-0593c533e290" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:13:15 compute-1 nova_compute[235132]: 2025-10-10 10:13:15.455 2 DEBUG oslo_concurrency.lockutils [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:13:15 compute-1 nova_compute[235132]: 2025-10-10 10:13:15.455 2 DEBUG oslo_concurrency.lockutils [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:13:15 compute-1 nova_compute[235132]: 2025-10-10 10:13:15.456 2 DEBUG oslo_concurrency.lockutils [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:13:15 compute-1 nova_compute[235132]: 2025-10-10 10:13:15.458 2 INFO nova.compute.manager [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Terminating instance
Oct 10 10:13:15 compute-1 nova_compute[235132]: 2025-10-10 10:13:15.460 2 DEBUG nova.compute.manager [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 10 10:13:15 compute-1 kernel: tapeb2cd434-44 (unregistering): left promiscuous mode
Oct 10 10:13:15 compute-1 NetworkManager[44982]: <info>  [1760091195.5278] device (tapeb2cd434-44): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 10 10:13:15 compute-1 nova_compute[235132]: 2025-10-10 10:13:15.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:15 compute-1 ovn_controller[131749]: 2025-10-10T10:13:15Z|00054|binding|INFO|Releasing lport eb2cd434-444d-4138-bbe8-948bf47d3986 from this chassis (sb_readonly=0)
Oct 10 10:13:15 compute-1 ovn_controller[131749]: 2025-10-10T10:13:15Z|00055|binding|INFO|Setting lport eb2cd434-444d-4138-bbe8-948bf47d3986 down in Southbound
Oct 10 10:13:15 compute-1 ovn_controller[131749]: 2025-10-10T10:13:15Z|00056|binding|INFO|Removing iface tapeb2cd434-44 ovn-installed in OVS
Oct 10 10:13:15 compute-1 nova_compute[235132]: 2025-10-10 10:13:15.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:15 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:15.555 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:9e:3d 10.100.0.6'], port_security=['fa:16:3e:8b:9e:3d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2fe2b257-7e1f-46c2-aed9-0593c533e290', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b2e1b849-99bd-43fd-883d-af1bb6750e12', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=86b59927-b11d-4637-a561-9adc673cffb1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>], logical_port=eb2cd434-444d-4138-bbe8-948bf47d3986) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:13:15 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:15.557 141156 INFO neutron.agent.ovn.metadata.agent [-] Port eb2cd434-444d-4138-bbe8-948bf47d3986 in datapath c1ba46b2-7e02-4d4f-b296-3e1e1f027d22 unbound from our chassis
Oct 10 10:13:15 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:15.559 141156 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c1ba46b2-7e02-4d4f-b296-3e1e1f027d22, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 10 10:13:15 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:15.560 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[23638119-d9f1-4d5f-a558-5f3e0318d326]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:13:15 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:15.561 141156 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22 namespace which is not needed anymore
Oct 10 10:13:15 compute-1 nova_compute[235132]: 2025-10-10 10:13:15.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:15 compute-1 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000003.scope: Deactivated successfully.
Oct 10 10:13:15 compute-1 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000003.scope: Consumed 15.487s CPU time.
Oct 10 10:13:15 compute-1 systemd-machined[191637]: Machine qemu-2-instance-00000003 terminated.
Oct 10 10:13:15 compute-1 nova_compute[235132]: 2025-10-10 10:13:15.700 2 INFO nova.virt.libvirt.driver [-] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Instance destroyed successfully.
Oct 10 10:13:15 compute-1 nova_compute[235132]: 2025-10-10 10:13:15.701 2 DEBUG nova.objects.instance [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'resources' on Instance uuid 2fe2b257-7e1f-46c2-aed9-0593c533e290 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 10:13:15 compute-1 neutron-haproxy-ovnmeta-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22[240407]: [NOTICE]   (240411) : haproxy version is 2.8.14-c23fe91
Oct 10 10:13:15 compute-1 neutron-haproxy-ovnmeta-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22[240407]: [NOTICE]   (240411) : path to executable is /usr/sbin/haproxy
Oct 10 10:13:15 compute-1 neutron-haproxy-ovnmeta-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22[240407]: [WARNING]  (240411) : Exiting Master process...
Oct 10 10:13:15 compute-1 neutron-haproxy-ovnmeta-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22[240407]: [WARNING]  (240411) : Exiting Master process...
Oct 10 10:13:15 compute-1 neutron-haproxy-ovnmeta-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22[240407]: [ALERT]    (240411) : Current worker (240413) exited with code 143 (Terminated)
Oct 10 10:13:15 compute-1 neutron-haproxy-ovnmeta-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22[240407]: [WARNING]  (240411) : All workers exited. Exiting... (0)
Oct 10 10:13:15 compute-1 nova_compute[235132]: 2025-10-10 10:13:15.731 2 DEBUG nova.virt.libvirt.vif [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-10T10:12:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1167416058',display_name='tempest-TestNetworkBasicOps-server-1167416058',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1167416058',id=3,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMTGlkGkqxsntvfgB83G1SiWbnvW7dUHWa7mCt3Bc6Si4+/bYujJEPULms51s1qxf1oywpdTq7/IqVAkQU+odhhf7e0wmrvEXEJnRsSzZiDWmk6FDUoWkd1hV01XsyivgQ==',key_name='tempest-TestNetworkBasicOps-919848835',keypairs=<?>,launch_index=0,launched_at=2025-10-10T10:12:36Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-jd07fe8o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-10T10:12:36Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=2fe2b257-7e1f-46c2-aed9-0593c533e290,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "eb2cd434-444d-4138-bbe8-948bf47d3986", "address": "fa:16:3e:8b:9e:3d", "network": {"id": "c1ba46b2-7e02-4d4f-b296-3e1e1f027d22", "bridge": "br-int", "label": "tempest-network-smoke--2006106686", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb2cd434-44", "ovs_interfaceid": "eb2cd434-444d-4138-bbe8-948bf47d3986", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 10 10:13:15 compute-1 nova_compute[235132]: 2025-10-10 10:13:15.732 2 DEBUG nova.network.os_vif_util [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "eb2cd434-444d-4138-bbe8-948bf47d3986", "address": "fa:16:3e:8b:9e:3d", "network": {"id": "c1ba46b2-7e02-4d4f-b296-3e1e1f027d22", "bridge": "br-int", "label": "tempest-network-smoke--2006106686", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb2cd434-44", "ovs_interfaceid": "eb2cd434-444d-4138-bbe8-948bf47d3986", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 10 10:13:15 compute-1 nova_compute[235132]: 2025-10-10 10:13:15.733 2 DEBUG nova.network.os_vif_util [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8b:9e:3d,bridge_name='br-int',has_traffic_filtering=True,id=eb2cd434-444d-4138-bbe8-948bf47d3986,network=Network(c1ba46b2-7e02-4d4f-b296-3e1e1f027d22),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb2cd434-44') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 10 10:13:15 compute-1 nova_compute[235132]: 2025-10-10 10:13:15.733 2 DEBUG os_vif [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8b:9e:3d,bridge_name='br-int',has_traffic_filtering=True,id=eb2cd434-444d-4138-bbe8-948bf47d3986,network=Network(c1ba46b2-7e02-4d4f-b296-3e1e1f027d22),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb2cd434-44') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 10 10:13:15 compute-1 nova_compute[235132]: 2025-10-10 10:13:15.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:15 compute-1 systemd[1]: libpod-a01570374548315ef7bf6635fa67f6eb7a97cd3857a5c57a88d956949567c026.scope: Deactivated successfully.
Oct 10 10:13:15 compute-1 nova_compute[235132]: 2025-10-10 10:13:15.737 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb2cd434-44, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:13:15 compute-1 podman[240848]: 2025-10-10 10:13:15.739672314 +0000 UTC m=+0.061152424 container died a01570374548315ef7bf6635fa67f6eb7a97cd3857a5c57a88d956949567c026 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 10 10:13:15 compute-1 nova_compute[235132]: 2025-10-10 10:13:15.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:15 compute-1 nova_compute[235132]: 2025-10-10 10:13:15.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 10 10:13:15 compute-1 nova_compute[235132]: 2025-10-10 10:13:15.748 2 INFO os_vif [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8b:9e:3d,bridge_name='br-int',has_traffic_filtering=True,id=eb2cd434-444d-4138-bbe8-948bf47d3986,network=Network(c1ba46b2-7e02-4d4f-b296-3e1e1f027d22),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb2cd434-44')
Oct 10 10:13:15 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a01570374548315ef7bf6635fa67f6eb7a97cd3857a5c57a88d956949567c026-userdata-shm.mount: Deactivated successfully.
Oct 10 10:13:15 compute-1 systemd[1]: var-lib-containers-storage-overlay-4f454a5f6c0eb08c56ed00e9648965604ea84ac6e2edf2652dc6afe6afb2c063-merged.mount: Deactivated successfully.
Oct 10 10:13:15 compute-1 nova_compute[235132]: 2025-10-10 10:13:15.803 2 DEBUG nova.compute.manager [req-2d43edd2-0bbd-4b2c-be79-9526667a46da req-9b91bc60-eff1-466a-aa4e-0216483274b6 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Received event network-vif-unplugged-eb2cd434-444d-4138-bbe8-948bf47d3986 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:13:15 compute-1 nova_compute[235132]: 2025-10-10 10:13:15.803 2 DEBUG oslo_concurrency.lockutils [req-2d43edd2-0bbd-4b2c-be79-9526667a46da req-9b91bc60-eff1-466a-aa4e-0216483274b6 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:13:15 compute-1 nova_compute[235132]: 2025-10-10 10:13:15.803 2 DEBUG oslo_concurrency.lockutils [req-2d43edd2-0bbd-4b2c-be79-9526667a46da req-9b91bc60-eff1-466a-aa4e-0216483274b6 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:13:15 compute-1 nova_compute[235132]: 2025-10-10 10:13:15.804 2 DEBUG oslo_concurrency.lockutils [req-2d43edd2-0bbd-4b2c-be79-9526667a46da req-9b91bc60-eff1-466a-aa4e-0216483274b6 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:13:15 compute-1 nova_compute[235132]: 2025-10-10 10:13:15.804 2 DEBUG nova.compute.manager [req-2d43edd2-0bbd-4b2c-be79-9526667a46da req-9b91bc60-eff1-466a-aa4e-0216483274b6 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] No waiting events found dispatching network-vif-unplugged-eb2cd434-444d-4138-bbe8-948bf47d3986 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 10:13:15 compute-1 nova_compute[235132]: 2025-10-10 10:13:15.804 2 DEBUG nova.compute.manager [req-2d43edd2-0bbd-4b2c-be79-9526667a46da req-9b91bc60-eff1-466a-aa4e-0216483274b6 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Received event network-vif-unplugged-eb2cd434-444d-4138-bbe8-948bf47d3986 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 10 10:13:15 compute-1 podman[240848]: 2025-10-10 10:13:15.804803524 +0000 UTC m=+0.126283624 container cleanup a01570374548315ef7bf6635fa67f6eb7a97cd3857a5c57a88d956949567c026 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 10 10:13:15 compute-1 systemd[1]: libpod-conmon-a01570374548315ef7bf6635fa67f6eb7a97cd3857a5c57a88d956949567c026.scope: Deactivated successfully.
Oct 10 10:13:15 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:15 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:15 compute-1 podman[240907]: 2025-10-10 10:13:15.873749188 +0000 UTC m=+0.045971487 container remove a01570374548315ef7bf6635fa67f6eb7a97cd3857a5c57a88d956949567c026 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 10 10:13:15 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:15.883 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[f84347cc-bb9a-442f-b00e-96dbee02b206]: (4, ('Fri Oct 10 10:13:15 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22 (a01570374548315ef7bf6635fa67f6eb7a97cd3857a5c57a88d956949567c026)\na01570374548315ef7bf6635fa67f6eb7a97cd3857a5c57a88d956949567c026\nFri Oct 10 10:13:15 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22 (a01570374548315ef7bf6635fa67f6eb7a97cd3857a5c57a88d956949567c026)\na01570374548315ef7bf6635fa67f6eb7a97cd3857a5c57a88d956949567c026\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:13:15 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:15.885 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[9c638ee6-3b17-4c86-8567-715205bbd89c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:13:15 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:15.887 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc1ba46b2-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:13:15 compute-1 kernel: tapc1ba46b2-70: left promiscuous mode
Oct 10 10:13:15 compute-1 nova_compute[235132]: 2025-10-10 10:13:15.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:15 compute-1 nova_compute[235132]: 2025-10-10 10:13:15.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:15 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:15.924 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[63f00ea2-416c-4ccf-9fd0-34e9a26e5a90]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:13:15 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:15 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:15.956 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[ac7b3f55-87e9-435b-9183-d7472fa2262c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:13:15 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:15.957 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[7af6bacc-b7c5-4fce-9573-ca2fae4abfd8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:13:15 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:15 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:15.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:15 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:15.984 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[b0e44fa1-1b16-4511-91f9-f1bffa208ae8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 406919, 'reachable_time': 21764, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240921, 'error': None, 'target': 'ovnmeta-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:13:15 compute-1 systemd[1]: run-netns-ovnmeta\x2dc1ba46b2\x2d7e02\x2d4d4f\x2db296\x2d3e1e1f027d22.mount: Deactivated successfully.
Oct 10 10:13:15 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:15.991 141275 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c1ba46b2-7e02-4d4f-b296-3e1e1f027d22 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 10 10:13:15 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:15.992 141275 DEBUG oslo.privsep.daemon [-] privsep: reply[f41825d3-8943-4f90-9964-21cd4617d813]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:13:15 compute-1 ceph-mon[79167]: pgmap v819: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 7.6 KiB/s rd, 1023 B/s wr, 1 op/s
Oct 10 10:13:16 compute-1 unix_chkpwd[240923]: password check failed for user (root)
Oct 10 10:13:16 compute-1 nova_compute[235132]: 2025-10-10 10:13:16.230 2 INFO nova.virt.libvirt.driver [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Deleting instance files /var/lib/nova/instances/2fe2b257-7e1f-46c2-aed9-0593c533e290_del
Oct 10 10:13:16 compute-1 nova_compute[235132]: 2025-10-10 10:13:16.231 2 INFO nova.virt.libvirt.driver [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Deletion of /var/lib/nova/instances/2fe2b257-7e1f-46c2-aed9-0593c533e290_del complete
Oct 10 10:13:16 compute-1 nova_compute[235132]: 2025-10-10 10:13:16.293 2 INFO nova.compute.manager [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Took 0.83 seconds to destroy the instance on the hypervisor.
Oct 10 10:13:16 compute-1 nova_compute[235132]: 2025-10-10 10:13:16.294 2 DEBUG oslo.service.loopingcall [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 10 10:13:16 compute-1 nova_compute[235132]: 2025-10-10 10:13:16.294 2 DEBUG nova.compute.manager [-] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 10 10:13:16 compute-1 nova_compute[235132]: 2025-10-10 10:13:16.295 2 DEBUG nova.network.neutron [-] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 10 10:13:16 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:16 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:13:16 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:16.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:13:16 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:16 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:17 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:13:17 compute-1 nova_compute[235132]: 2025-10-10 10:13:17.008 2 DEBUG nova.network.neutron [req-0d269ad5-7ca7-493d-804c-dbabaee340a8 req-ae60f373-d4b1-40b3-b575-8ea0bb85981f 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Updated VIF entry in instance network info cache for port eb2cd434-444d-4138-bbe8-948bf47d3986. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 10 10:13:17 compute-1 nova_compute[235132]: 2025-10-10 10:13:17.009 2 DEBUG nova.network.neutron [req-0d269ad5-7ca7-493d-804c-dbabaee340a8 req-ae60f373-d4b1-40b3-b575-8ea0bb85981f 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Updating instance_info_cache with network_info: [{"id": "eb2cd434-444d-4138-bbe8-948bf47d3986", "address": "fa:16:3e:8b:9e:3d", "network": {"id": "c1ba46b2-7e02-4d4f-b296-3e1e1f027d22", "bridge": "br-int", "label": "tempest-network-smoke--2006106686", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb2cd434-44", "ovs_interfaceid": "eb2cd434-444d-4138-bbe8-948bf47d3986", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:13:17 compute-1 nova_compute[235132]: 2025-10-10 10:13:17.036 2 DEBUG oslo_concurrency.lockutils [req-0d269ad5-7ca7-493d-804c-dbabaee340a8 req-ae60f373-d4b1-40b3-b575-8ea0bb85981f 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Releasing lock "refresh_cache-2fe2b257-7e1f-46c2-aed9-0593c533e290" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:13:17 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:17 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320004cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:17 compute-1 nova_compute[235132]: 2025-10-10 10:13:17.211 2 DEBUG nova.network.neutron [-] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:13:17 compute-1 nova_compute[235132]: 2025-10-10 10:13:17.229 2 INFO nova.compute.manager [-] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Took 0.93 seconds to deallocate network for instance.
Oct 10 10:13:17 compute-1 nova_compute[235132]: 2025-10-10 10:13:17.282 2 DEBUG oslo_concurrency.lockutils [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:13:17 compute-1 nova_compute[235132]: 2025-10-10 10:13:17.282 2 DEBUG oslo_concurrency.lockutils [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:13:17 compute-1 nova_compute[235132]: 2025-10-10 10:13:17.401 2 DEBUG oslo_concurrency.processutils [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:13:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:13:17 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:17 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:17 compute-1 nova_compute[235132]: 2025-10-10 10:13:17.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:13:17 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2423340283' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:13:17 compute-1 nova_compute[235132]: 2025-10-10 10:13:17.925 2 DEBUG nova.compute.manager [req-5fca1ec3-44ba-4433-af09-9f669b6cce50 req-80c12755-d534-4f52-957a-79ddf9311545 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Received event network-vif-plugged-eb2cd434-444d-4138-bbe8-948bf47d3986 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:13:17 compute-1 nova_compute[235132]: 2025-10-10 10:13:17.926 2 DEBUG oslo_concurrency.lockutils [req-5fca1ec3-44ba-4433-af09-9f669b6cce50 req-80c12755-d534-4f52-957a-79ddf9311545 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:13:17 compute-1 nova_compute[235132]: 2025-10-10 10:13:17.926 2 DEBUG oslo_concurrency.lockutils [req-5fca1ec3-44ba-4433-af09-9f669b6cce50 req-80c12755-d534-4f52-957a-79ddf9311545 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:13:17 compute-1 nova_compute[235132]: 2025-10-10 10:13:17.927 2 DEBUG oslo_concurrency.lockutils [req-5fca1ec3-44ba-4433-af09-9f669b6cce50 req-80c12755-d534-4f52-957a-79ddf9311545 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "2fe2b257-7e1f-46c2-aed9-0593c533e290-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:13:17 compute-1 nova_compute[235132]: 2025-10-10 10:13:17.927 2 DEBUG nova.compute.manager [req-5fca1ec3-44ba-4433-af09-9f669b6cce50 req-80c12755-d534-4f52-957a-79ddf9311545 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] No waiting events found dispatching network-vif-plugged-eb2cd434-444d-4138-bbe8-948bf47d3986 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 10:13:17 compute-1 nova_compute[235132]: 2025-10-10 10:13:17.927 2 WARNING nova.compute.manager [req-5fca1ec3-44ba-4433-af09-9f669b6cce50 req-80c12755-d534-4f52-957a-79ddf9311545 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Received unexpected event network-vif-plugged-eb2cd434-444d-4138-bbe8-948bf47d3986 for instance with vm_state deleted and task_state None.
Oct 10 10:13:17 compute-1 nova_compute[235132]: 2025-10-10 10:13:17.928 2 DEBUG nova.compute.manager [req-5fca1ec3-44ba-4433-af09-9f669b6cce50 req-80c12755-d534-4f52-957a-79ddf9311545 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Received event network-vif-deleted-eb2cd434-444d-4138-bbe8-948bf47d3986 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:13:17 compute-1 nova_compute[235132]: 2025-10-10 10:13:17.929 2 DEBUG oslo_concurrency.processutils [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:13:17 compute-1 nova_compute[235132]: 2025-10-10 10:13:17.936 2 DEBUG nova.compute.provider_tree [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:13:17 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:17 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:13:17 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:17.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:13:17 compute-1 nova_compute[235132]: 2025-10-10 10:13:17.960 2 DEBUG nova.scheduler.client.report [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:13:17 compute-1 nova_compute[235132]: 2025-10-10 10:13:17.993 2 DEBUG oslo_concurrency.lockutils [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:13:18 compute-1 ceph-mon[79167]: pgmap v820: 353 pgs: 353 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 7.6 KiB/s rd, 1023 B/s wr, 1 op/s
Oct 10 10:13:18 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/2423340283' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:13:18 compute-1 nova_compute[235132]: 2025-10-10 10:13:18.028 2 INFO nova.scheduler.client.report [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Deleted allocations for instance 2fe2b257-7e1f-46c2-aed9-0593c533e290
Oct 10 10:13:18 compute-1 nova_compute[235132]: 2025-10-10 10:13:18.100 2 DEBUG oslo_concurrency.lockutils [None req-527c885a-66a9-4e1e-b6a4-fc8aba468e49 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "2fe2b257-7e1f-46c2-aed9-0593c533e290" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:13:18 compute-1 sudo[240947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:13:18 compute-1 sudo[240947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:13:18 compute-1 sudo[240947]: pam_unix(sudo:session): session closed for user root
Oct 10 10:13:18 compute-1 sshd-session[240816]: Failed password for root from 193.46.255.159 port 57670 ssh2
Oct 10 10:13:18 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:18 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:18 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:18.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:18 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:19 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314001a50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:19 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:19 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320004cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:19 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:19 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:19 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:19.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:20 compute-1 ceph-mon[79167]: pgmap v821: 353 pgs: 353 active+clean; 41 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 7.9 KiB/s wr, 29 op/s
Oct 10 10:13:20 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:20 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:13:20 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:20 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:20 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:20.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:20 compute-1 unix_chkpwd[240973]: password check failed for user (root)
Oct 10 10:13:20 compute-1 nova_compute[235132]: 2025-10-10 10:13:20.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:20 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:20 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:20 compute-1 nova_compute[235132]: 2025-10-10 10:13:20.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:20 compute-1 nova_compute[235132]: 2025-10-10 10:13:20.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:21 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:21 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:21 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314001bf0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:21 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:21 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:21 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:21.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:22 compute-1 sshd-session[240816]: Failed password for root from 193.46.255.159 port 57670 ssh2
Oct 10 10:13:22 compute-1 ceph-mon[79167]: pgmap v822: 353 pgs: 353 active+clean; 41 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 6.9 KiB/s wr, 29 op/s
Oct 10 10:13:22 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:22 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:13:22 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:22.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:13:22 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:13:22 compute-1 sshd-session[240816]: Received disconnect from 193.46.255.159 port 57670:11:  [preauth]
Oct 10 10:13:22 compute-1 sshd-session[240816]: Disconnected from authenticating user root 193.46.255.159 port 57670 [preauth]
Oct 10 10:13:22 compute-1 sshd-session[240816]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.159  user=root
Oct 10 10:13:22 compute-1 nova_compute[235132]: 2025-10-10 10:13:22.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:22 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:22 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320004cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:23 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:23 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:13:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:23 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:13:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:23 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:23 compute-1 podman[240978]: 2025-10-10 10:13:23.963439293 +0000 UTC m=+0.062216901 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 10:13:23 compute-1 podman[240977]: 2025-10-10 10:13:23.964531374 +0000 UTC m=+0.065183944 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 10 10:13:23 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:23 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:23 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:23.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:24 compute-1 ceph-mon[79167]: pgmap v823: 353 pgs: 353 active+clean; 41 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 7.4 KiB/s wr, 30 op/s
Oct 10 10:13:24 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:24 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:24 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:24.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:24 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:24 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53140025a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:25 compute-1 podman[241015]: 2025-10-10 10:13:25.016757582 +0000 UTC m=+0.119703664 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 10 10:13:25 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:25 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53140025a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:25 compute-1 nova_compute[235132]: 2025-10-10 10:13:25.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:25 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:25 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:25 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:25 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:25 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:25.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:26 compute-1 ceph-mon[79167]: pgmap v824: 353 pgs: 353 active+clean; 41 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 7.4 KiB/s wr, 29 op/s
Oct 10 10:13:26 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:26 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 10 10:13:26 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:26 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:26 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:26.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:26 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:26 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:27 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:27 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53140025a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:27 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/3183478379' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:13:27 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/3183478379' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:13:27 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:13:27 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:27 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53140025a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:27 compute-1 nova_compute[235132]: 2025-10-10 10:13:27.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:27 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:27 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:13:27 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:27.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:13:28 compute-1 ceph-mon[79167]: pgmap v825: 353 pgs: 353 active+clean; 41 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 7.4 KiB/s wr, 29 op/s
Oct 10 10:13:28 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:28 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:28 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:28.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:28 compute-1 sudo[241044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:13:28 compute-1 sudo[241044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:13:28 compute-1 sudo[241044]: pam_unix(sudo:session): session closed for user root
Oct 10 10:13:28 compute-1 sudo[241069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:13:28 compute-1 sudo[241069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:13:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:28 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:29 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:29 compute-1 sudo[241069]: pam_unix(sudo:session): session closed for user root
Oct 10 10:13:29 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:29 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320004cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:29 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:29 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:29 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:29.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:30 compute-1 ceph-mon[79167]: pgmap v826: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 7.8 KiB/s wr, 31 op/s
Oct 10 10:13:30 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:13:30 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:13:30 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:13:30 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:13:30 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:13:30 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:13:30 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:13:30 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:30 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:30 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:30.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:30 compute-1 nova_compute[235132]: 2025-10-10 10:13:30.698 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760091195.6956131, 2fe2b257-7e1f-46c2-aed9-0593c533e290 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 10:13:30 compute-1 nova_compute[235132]: 2025-10-10 10:13:30.698 2 INFO nova.compute.manager [-] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] VM Stopped (Lifecycle Event)
Oct 10 10:13:30 compute-1 nova_compute[235132]: 2025-10-10 10:13:30.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:30 compute-1 nova_compute[235132]: 2025-10-10 10:13:30.755 2 DEBUG nova.compute.manager [None req-51315287-710b-4a50-af0e-b000cddec616 - - - - - -] [instance: 2fe2b257-7e1f-46c2-aed9-0593c533e290] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:13:30 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:30 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f53140025a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:31 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:31 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:31 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:31 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:13:31 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:31.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:13:32 compute-1 ceph-mon[79167]: pgmap v827: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:13:32 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:13:32 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:32 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:32 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:32.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:32 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:13:32 compute-1 nova_compute[235132]: 2025-10-10 10:13:32.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:32 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:32 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320004cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:33 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_23] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/101333 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 10 10:13:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:33 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:33 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:33 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:33 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:33.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:34 compute-1 ceph-mon[79167]: pgmap v828: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct 10 10:13:34 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:13:34 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:13:34 compute-1 sudo[241130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:13:34 compute-1 sudo[241130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:13:34 compute-1 sudo[241130]: pam_unix(sudo:session): session closed for user root
Oct 10 10:13:34 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:34 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:34 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:34.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:34 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:34 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328004630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:35 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:35 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:35 compute-1 nova_compute[235132]: 2025-10-10 10:13:35.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:35 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:35 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:35 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:35 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:35 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:35.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:36 compute-1 ceph-mon[79167]: pgmap v829: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 10:13:36 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:36 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:36 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:36.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:36 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:36 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:37 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328004630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:13:37 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:37 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:37 compute-1 nova_compute[235132]: 2025-10-10 10:13:37.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:37 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:37.855 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:dc:6a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:2f:dd:4e:d8:41'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:13:37 compute-1 nova_compute[235132]: 2025-10-10 10:13:37.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:37 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:37.857 141156 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 10 10:13:37 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:37 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:13:37 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:37.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:13:38 compute-1 ceph-mon[79167]: pgmap v830: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 10:13:38 compute-1 sudo[241157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:13:38 compute-1 sudo[241157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:13:38 compute-1 sudo[241157]: pam_unix(sudo:session): session closed for user root
Oct 10 10:13:38 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:38 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:13:38 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:38.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:13:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:38 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:39 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:39 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328004630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:39 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:39 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:39 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:39.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:40 compute-1 ceph-mon[79167]: pgmap v831: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 10 10:13:40 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:40 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:13:40 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:40.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:13:40 compute-1 nova_compute[235132]: 2025-10-10 10:13:40.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:40 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:40 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f531c001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:41 compute-1 podman[241183]: 2025-10-10 10:13:41.024806503 +0000 UTC m=+0.117047341 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:13:41 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:41 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c0046a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:41 compute-1 nova_compute[235132]: 2025-10-10 10:13:41.117 2 DEBUG oslo_concurrency.lockutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "12298a8d-d383-47da-91e4-0a918e153f1d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:13:41 compute-1 nova_compute[235132]: 2025-10-10 10:13:41.118 2 DEBUG oslo_concurrency.lockutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "12298a8d-d383-47da-91e4-0a918e153f1d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:13:41 compute-1 nova_compute[235132]: 2025-10-10 10:13:41.137 2 DEBUG nova.compute.manager [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 10 10:13:41 compute-1 nova_compute[235132]: 2025-10-10 10:13:41.235 2 DEBUG oslo_concurrency.lockutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:13:41 compute-1 nova_compute[235132]: 2025-10-10 10:13:41.236 2 DEBUG oslo_concurrency.lockutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:13:41 compute-1 nova_compute[235132]: 2025-10-10 10:13:41.249 2 DEBUG nova.virt.hardware [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 10 10:13:41 compute-1 nova_compute[235132]: 2025-10-10 10:13:41.249 2 INFO nova.compute.claims [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Claim successful on node compute-1.ctlplane.example.com
Oct 10 10:13:41 compute-1 nova_compute[235132]: 2025-10-10 10:13:41.396 2 DEBUG oslo_concurrency.processutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:13:41 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:13:41 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1897484170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:13:41 compute-1 nova_compute[235132]: 2025-10-10 10:13:41.840 2 DEBUG oslo_concurrency.processutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:13:41 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:41 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:41 compute-1 nova_compute[235132]: 2025-10-10 10:13:41.849 2 DEBUG nova.compute.provider_tree [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:13:41 compute-1 nova_compute[235132]: 2025-10-10 10:13:41.920 2 DEBUG nova.scheduler.client.report [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:13:41 compute-1 nova_compute[235132]: 2025-10-10 10:13:41.950 2 DEBUG oslo_concurrency.lockutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:13:41 compute-1 nova_compute[235132]: 2025-10-10 10:13:41.952 2 DEBUG nova.compute.manager [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 10 10:13:41 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:41 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:41 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:41.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:42 compute-1 nova_compute[235132]: 2025-10-10 10:13:42.010 2 DEBUG nova.compute.manager [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 10 10:13:42 compute-1 nova_compute[235132]: 2025-10-10 10:13:42.011 2 DEBUG nova.network.neutron [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 10 10:13:42 compute-1 nova_compute[235132]: 2025-10-10 10:13:42.045 2 INFO nova.virt.libvirt.driver [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 10 10:13:42 compute-1 nova_compute[235132]: 2025-10-10 10:13:42.075 2 DEBUG nova.compute.manager [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 10 10:13:42 compute-1 nova_compute[235132]: 2025-10-10 10:13:42.193 2 DEBUG nova.compute.manager [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 10 10:13:42 compute-1 nova_compute[235132]: 2025-10-10 10:13:42.195 2 DEBUG nova.virt.libvirt.driver [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 10 10:13:42 compute-1 nova_compute[235132]: 2025-10-10 10:13:42.196 2 INFO nova.virt.libvirt.driver [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Creating image(s)
Oct 10 10:13:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:42.207 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:13:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:42.207 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:13:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:42.208 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:13:42 compute-1 nova_compute[235132]: 2025-10-10 10:13:42.237 2 DEBUG nova.storage.rbd_utils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 12298a8d-d383-47da-91e4-0a918e153f1d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:13:42 compute-1 ceph-mon[79167]: pgmap v832: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 10 10:13:42 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/1897484170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:13:42 compute-1 nova_compute[235132]: 2025-10-10 10:13:42.284 2 DEBUG nova.storage.rbd_utils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 12298a8d-d383-47da-91e4-0a918e153f1d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:13:42 compute-1 nova_compute[235132]: 2025-10-10 10:13:42.327 2 DEBUG nova.storage.rbd_utils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 12298a8d-d383-47da-91e4-0a918e153f1d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:13:42 compute-1 nova_compute[235132]: 2025-10-10 10:13:42.334 2 DEBUG oslo_concurrency.processutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:13:42 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:42 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:42 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:42.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:42 compute-1 nova_compute[235132]: 2025-10-10 10:13:42.418 2 DEBUG oslo_concurrency.processutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:13:42 compute-1 nova_compute[235132]: 2025-10-10 10:13:42.420 2 DEBUG oslo_concurrency.lockutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "eec5fe2328f977d3b1a385313e521aef425c0ac1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:13:42 compute-1 nova_compute[235132]: 2025-10-10 10:13:42.422 2 DEBUG oslo_concurrency.lockutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "eec5fe2328f977d3b1a385313e521aef425c0ac1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:13:42 compute-1 nova_compute[235132]: 2025-10-10 10:13:42.422 2 DEBUG oslo_concurrency.lockutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "eec5fe2328f977d3b1a385313e521aef425c0ac1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:13:42 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:13:42 compute-1 nova_compute[235132]: 2025-10-10 10:13:42.471 2 DEBUG nova.storage.rbd_utils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 12298a8d-d383-47da-91e4-0a918e153f1d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:13:42 compute-1 nova_compute[235132]: 2025-10-10 10:13:42.478 2 DEBUG oslo_concurrency.processutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 12298a8d-d383-47da-91e4-0a918e153f1d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:13:42 compute-1 nova_compute[235132]: 2025-10-10 10:13:42.796 2 DEBUG nova.policy [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7956778c03764aaf8906c9b435337976', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 10 10:13:42 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:42 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:42 compute-1 nova_compute[235132]: 2025-10-10 10:13:42.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:42 compute-1 nova_compute[235132]: 2025-10-10 10:13:42.924 2 DEBUG oslo_concurrency.processutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 12298a8d-d383-47da-91e4-0a918e153f1d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:13:43 compute-1 nova_compute[235132]: 2025-10-10 10:13:43.006 2 DEBUG nova.storage.rbd_utils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] resizing rbd image 12298a8d-d383-47da-91e4-0a918e153f1d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 10 10:13:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:43 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:43 compute-1 nova_compute[235132]: 2025-10-10 10:13:43.162 2 DEBUG nova.objects.instance [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'migration_context' on Instance uuid 12298a8d-d383-47da-91e4-0a918e153f1d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 10:13:43 compute-1 nova_compute[235132]: 2025-10-10 10:13:43.411 2 DEBUG nova.virt.libvirt.driver [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 10 10:13:43 compute-1 nova_compute[235132]: 2025-10-10 10:13:43.411 2 DEBUG nova.virt.libvirt.driver [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Ensure instance console log exists: /var/lib/nova/instances/12298a8d-d383-47da-91e4-0a918e153f1d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 10 10:13:43 compute-1 nova_compute[235132]: 2025-10-10 10:13:43.412 2 DEBUG oslo_concurrency.lockutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:13:43 compute-1 nova_compute[235132]: 2025-10-10 10:13:43.412 2 DEBUG oslo_concurrency.lockutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:13:43 compute-1 nova_compute[235132]: 2025-10-10 10:13:43.413 2 DEBUG oslo_concurrency.lockutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:13:43 compute-1 nova_compute[235132]: 2025-10-10 10:13:43.834 2 DEBUG nova.network.neutron [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Successfully created port: 446b0e59-d2be-42d8-801f-7ba63ba76e66 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 10 10:13:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:43 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c0046c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/101343 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 10:13:43 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:43 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:43 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:43.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:44 compute-1 ceph-mon[79167]: pgmap v833: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:13:44 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:44 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:44 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:44.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:44.861 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ee0899c1-415d-4aa8-abe8-1240b4e8bf2c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:13:44 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:44 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328004630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:45 compute-1 nova_compute[235132]: 2025-10-10 10:13:45.036 2 DEBUG nova.network.neutron [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Successfully updated port: 446b0e59-d2be-42d8-801f-7ba63ba76e66 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 10 10:13:45 compute-1 nova_compute[235132]: 2025-10-10 10:13:45.064 2 DEBUG oslo_concurrency.lockutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "refresh_cache-12298a8d-d383-47da-91e4-0a918e153f1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:13:45 compute-1 nova_compute[235132]: 2025-10-10 10:13:45.065 2 DEBUG oslo_concurrency.lockutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquired lock "refresh_cache-12298a8d-d383-47da-91e4-0a918e153f1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:13:45 compute-1 nova_compute[235132]: 2025-10-10 10:13:45.065 2 DEBUG nova.network.neutron [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 10 10:13:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:45 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5340001b90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:45 compute-1 nova_compute[235132]: 2025-10-10 10:13:45.161 2 DEBUG nova.compute.manager [req-6d53b4c6-f79e-4913-a3d7-04843d6cb074 req-41370972-3f2f-4e81-a8a5-365aae2fedf4 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Received event network-changed-446b0e59-d2be-42d8-801f-7ba63ba76e66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:13:45 compute-1 nova_compute[235132]: 2025-10-10 10:13:45.162 2 DEBUG nova.compute.manager [req-6d53b4c6-f79e-4913-a3d7-04843d6cb074 req-41370972-3f2f-4e81-a8a5-365aae2fedf4 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Refreshing instance network info cache due to event network-changed-446b0e59-d2be-42d8-801f-7ba63ba76e66. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 10 10:13:45 compute-1 nova_compute[235132]: 2025-10-10 10:13:45.162 2 DEBUG oslo_concurrency.lockutils [req-6d53b4c6-f79e-4913-a3d7-04843d6cb074 req-41370972-3f2f-4e81-a8a5-365aae2fedf4 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "refresh_cache-12298a8d-d383-47da-91e4-0a918e153f1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:13:45 compute-1 nova_compute[235132]: 2025-10-10 10:13:45.315 2 DEBUG nova.network.neutron [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 10 10:13:45 compute-1 ceph-mon[79167]: pgmap v834: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:13:45 compute-1 nova_compute[235132]: 2025-10-10 10:13:45.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:45 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:45 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:45 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:45 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:45.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:46 compute-1 nova_compute[235132]: 2025-10-10 10:13:46.127 2 DEBUG nova.network.neutron [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Updating instance_info_cache with network_info: [{"id": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "address": "fa:16:3e:9d:f6:71", "network": {"id": "c8850c4c-dc38-4440-9c03-f2dd59684fe6", "bridge": "br-int", "label": "tempest-network-smoke--781461532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap446b0e59-d2", "ovs_interfaceid": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:13:46 compute-1 nova_compute[235132]: 2025-10-10 10:13:46.158 2 DEBUG oslo_concurrency.lockutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Releasing lock "refresh_cache-12298a8d-d383-47da-91e4-0a918e153f1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:13:46 compute-1 nova_compute[235132]: 2025-10-10 10:13:46.159 2 DEBUG nova.compute.manager [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Instance network_info: |[{"id": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "address": "fa:16:3e:9d:f6:71", "network": {"id": "c8850c4c-dc38-4440-9c03-f2dd59684fe6", "bridge": "br-int", "label": "tempest-network-smoke--781461532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap446b0e59-d2", "ovs_interfaceid": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 10 10:13:46 compute-1 nova_compute[235132]: 2025-10-10 10:13:46.160 2 DEBUG oslo_concurrency.lockutils [req-6d53b4c6-f79e-4913-a3d7-04843d6cb074 req-41370972-3f2f-4e81-a8a5-365aae2fedf4 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquired lock "refresh_cache-12298a8d-d383-47da-91e4-0a918e153f1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:13:46 compute-1 nova_compute[235132]: 2025-10-10 10:13:46.160 2 DEBUG nova.network.neutron [req-6d53b4c6-f79e-4913-a3d7-04843d6cb074 req-41370972-3f2f-4e81-a8a5-365aae2fedf4 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Refreshing network info cache for port 446b0e59-d2be-42d8-801f-7ba63ba76e66 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 10 10:13:46 compute-1 nova_compute[235132]: 2025-10-10 10:13:46.168 2 DEBUG nova.virt.libvirt.driver [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Start _get_guest_xml network_info=[{"id": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "address": "fa:16:3e:9d:f6:71", "network": {"id": "c8850c4c-dc38-4440-9c03-f2dd59684fe6", "bridge": "br-int", "label": "tempest-network-smoke--781461532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap446b0e59-d2", "ovs_interfaceid": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-10T10:09:50Z,direct_url=<?>,disk_format='qcow2',id=5ae78700-970d-45b4-a57d-978a054c7519,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ec962e275689437d80680ff3ea69c852',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-10T10:09:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'boot_index': 0, 'guest_format': None, 'image_id': '5ae78700-970d-45b4-a57d-978a054c7519'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 10 10:13:46 compute-1 nova_compute[235132]: 2025-10-10 10:13:46.174 2 WARNING nova.virt.libvirt.driver [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:13:46 compute-1 nova_compute[235132]: 2025-10-10 10:13:46.184 2 DEBUG nova.virt.libvirt.host [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 10 10:13:46 compute-1 nova_compute[235132]: 2025-10-10 10:13:46.185 2 DEBUG nova.virt.libvirt.host [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 10 10:13:46 compute-1 nova_compute[235132]: 2025-10-10 10:13:46.189 2 DEBUG nova.virt.libvirt.host [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 10 10:13:46 compute-1 nova_compute[235132]: 2025-10-10 10:13:46.190 2 DEBUG nova.virt.libvirt.host [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 10 10:13:46 compute-1 nova_compute[235132]: 2025-10-10 10:13:46.191 2 DEBUG nova.virt.libvirt.driver [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 10 10:13:46 compute-1 nova_compute[235132]: 2025-10-10 10:13:46.191 2 DEBUG nova.virt.hardware [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-10T10:09:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='00373e71-6208-4238-ad85-db0452c53bc6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-10T10:09:50Z,direct_url=<?>,disk_format='qcow2',id=5ae78700-970d-45b4-a57d-978a054c7519,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ec962e275689437d80680ff3ea69c852',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-10T10:09:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 10 10:13:46 compute-1 nova_compute[235132]: 2025-10-10 10:13:46.192 2 DEBUG nova.virt.hardware [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 10 10:13:46 compute-1 nova_compute[235132]: 2025-10-10 10:13:46.192 2 DEBUG nova.virt.hardware [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 10 10:13:46 compute-1 nova_compute[235132]: 2025-10-10 10:13:46.193 2 DEBUG nova.virt.hardware [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 10 10:13:46 compute-1 nova_compute[235132]: 2025-10-10 10:13:46.193 2 DEBUG nova.virt.hardware [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 10 10:13:46 compute-1 nova_compute[235132]: 2025-10-10 10:13:46.193 2 DEBUG nova.virt.hardware [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 10 10:13:46 compute-1 nova_compute[235132]: 2025-10-10 10:13:46.194 2 DEBUG nova.virt.hardware [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 10 10:13:46 compute-1 nova_compute[235132]: 2025-10-10 10:13:46.194 2 DEBUG nova.virt.hardware [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 10 10:13:46 compute-1 nova_compute[235132]: 2025-10-10 10:13:46.195 2 DEBUG nova.virt.hardware [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 10 10:13:46 compute-1 nova_compute[235132]: 2025-10-10 10:13:46.195 2 DEBUG nova.virt.hardware [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 10 10:13:46 compute-1 nova_compute[235132]: 2025-10-10 10:13:46.196 2 DEBUG nova.virt.hardware [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 10 10:13:46 compute-1 nova_compute[235132]: 2025-10-10 10:13:46.200 2 DEBUG oslo_concurrency.processutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:13:46 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:13:46 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:46 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:13:46 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:46.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:13:46 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 10 10:13:46 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1301345308' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:13:46 compute-1 nova_compute[235132]: 2025-10-10 10:13:46.724 2 DEBUG oslo_concurrency.processutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:13:46 compute-1 nova_compute[235132]: 2025-10-10 10:13:46.764 2 DEBUG nova.storage.rbd_utils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 12298a8d-d383-47da-91e4-0a918e153f1d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:13:46 compute-1 nova_compute[235132]: 2025-10-10 10:13:46.769 2 DEBUG oslo_concurrency.processutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:13:46 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:46 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c0046c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:47 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:47 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328004630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 10 10:13:47 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2845144114' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:13:47 compute-1 nova_compute[235132]: 2025-10-10 10:13:47.205 2 DEBUG oslo_concurrency.processutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:13:47 compute-1 nova_compute[235132]: 2025-10-10 10:13:47.207 2 DEBUG nova.virt.libvirt.vif [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-10T10:13:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-742591551',display_name='tempest-TestNetworkBasicOps-server-742591551',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-742591551',id=4,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDCoryMMDZ6cZj1EAzGK4muKCZLgNsQyPcigwS48pCfmWHQQLrGNGrCkXZ7qqZSzWLyfX4m7fzgUMEko2IR4dU9srCI10SLqm/ZSwQK7hB66f+rf62WEii+W4TMQEFu9vA==',key_name='tempest-TestNetworkBasicOps-766718028',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-svhla3ss',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-10T10:13:42Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=12298a8d-d383-47da-91e4-0a918e153f1d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "address": "fa:16:3e:9d:f6:71", "network": {"id": "c8850c4c-dc38-4440-9c03-f2dd59684fe6", "bridge": "br-int", "label": "tempest-network-smoke--781461532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap446b0e59-d2", "ovs_interfaceid": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 10 10:13:47 compute-1 nova_compute[235132]: 2025-10-10 10:13:47.207 2 DEBUG nova.network.os_vif_util [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "address": "fa:16:3e:9d:f6:71", "network": {"id": "c8850c4c-dc38-4440-9c03-f2dd59684fe6", "bridge": "br-int", "label": "tempest-network-smoke--781461532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap446b0e59-d2", "ovs_interfaceid": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 10 10:13:47 compute-1 nova_compute[235132]: 2025-10-10 10:13:47.208 2 DEBUG nova.network.os_vif_util [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:f6:71,bridge_name='br-int',has_traffic_filtering=True,id=446b0e59-d2be-42d8-801f-7ba63ba76e66,network=Network(c8850c4c-dc38-4440-9c03-f2dd59684fe6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap446b0e59-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 10 10:13:47 compute-1 nova_compute[235132]: 2025-10-10 10:13:47.210 2 DEBUG nova.objects.instance [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 12298a8d-d383-47da-91e4-0a918e153f1d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 10:13:47 compute-1 nova_compute[235132]: 2025-10-10 10:13:47.225 2 DEBUG nova.virt.libvirt.driver [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] End _get_guest_xml xml=<domain type="kvm">
Oct 10 10:13:47 compute-1 nova_compute[235132]:   <uuid>12298a8d-d383-47da-91e4-0a918e153f1d</uuid>
Oct 10 10:13:47 compute-1 nova_compute[235132]:   <name>instance-00000004</name>
Oct 10 10:13:47 compute-1 nova_compute[235132]:   <memory>131072</memory>
Oct 10 10:13:47 compute-1 nova_compute[235132]:   <vcpu>1</vcpu>
Oct 10 10:13:47 compute-1 nova_compute[235132]:   <metadata>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 10:13:47 compute-1 nova_compute[235132]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:       <nova:name>tempest-TestNetworkBasicOps-server-742591551</nova:name>
Oct 10 10:13:47 compute-1 nova_compute[235132]:       <nova:creationTime>2025-10-10 10:13:46</nova:creationTime>
Oct 10 10:13:47 compute-1 nova_compute[235132]:       <nova:flavor name="m1.nano">
Oct 10 10:13:47 compute-1 nova_compute[235132]:         <nova:memory>128</nova:memory>
Oct 10 10:13:47 compute-1 nova_compute[235132]:         <nova:disk>1</nova:disk>
Oct 10 10:13:47 compute-1 nova_compute[235132]:         <nova:swap>0</nova:swap>
Oct 10 10:13:47 compute-1 nova_compute[235132]:         <nova:ephemeral>0</nova:ephemeral>
Oct 10 10:13:47 compute-1 nova_compute[235132]:         <nova:vcpus>1</nova:vcpus>
Oct 10 10:13:47 compute-1 nova_compute[235132]:       </nova:flavor>
Oct 10 10:13:47 compute-1 nova_compute[235132]:       <nova:owner>
Oct 10 10:13:47 compute-1 nova_compute[235132]:         <nova:user uuid="7956778c03764aaf8906c9b435337976">tempest-TestNetworkBasicOps-188749107-project-member</nova:user>
Oct 10 10:13:47 compute-1 nova_compute[235132]:         <nova:project uuid="d5e531d4b440422d946eaf6fd4e166f7">tempest-TestNetworkBasicOps-188749107</nova:project>
Oct 10 10:13:47 compute-1 nova_compute[235132]:       </nova:owner>
Oct 10 10:13:47 compute-1 nova_compute[235132]:       <nova:root type="image" uuid="5ae78700-970d-45b4-a57d-978a054c7519"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:       <nova:ports>
Oct 10 10:13:47 compute-1 nova_compute[235132]:         <nova:port uuid="446b0e59-d2be-42d8-801f-7ba63ba76e66">
Oct 10 10:13:47 compute-1 nova_compute[235132]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:         </nova:port>
Oct 10 10:13:47 compute-1 nova_compute[235132]:       </nova:ports>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     </nova:instance>
Oct 10 10:13:47 compute-1 nova_compute[235132]:   </metadata>
Oct 10 10:13:47 compute-1 nova_compute[235132]:   <sysinfo type="smbios">
Oct 10 10:13:47 compute-1 nova_compute[235132]:     <system>
Oct 10 10:13:47 compute-1 nova_compute[235132]:       <entry name="manufacturer">RDO</entry>
Oct 10 10:13:47 compute-1 nova_compute[235132]:       <entry name="product">OpenStack Compute</entry>
Oct 10 10:13:47 compute-1 nova_compute[235132]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 10:13:47 compute-1 nova_compute[235132]:       <entry name="serial">12298a8d-d383-47da-91e4-0a918e153f1d</entry>
Oct 10 10:13:47 compute-1 nova_compute[235132]:       <entry name="uuid">12298a8d-d383-47da-91e4-0a918e153f1d</entry>
Oct 10 10:13:47 compute-1 nova_compute[235132]:       <entry name="family">Virtual Machine</entry>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     </system>
Oct 10 10:13:47 compute-1 nova_compute[235132]:   </sysinfo>
Oct 10 10:13:47 compute-1 nova_compute[235132]:   <os>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     <boot dev="hd"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     <smbios mode="sysinfo"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:   </os>
Oct 10 10:13:47 compute-1 nova_compute[235132]:   <features>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     <acpi/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     <apic/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     <vmcoreinfo/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:   </features>
Oct 10 10:13:47 compute-1 nova_compute[235132]:   <clock offset="utc">
Oct 10 10:13:47 compute-1 nova_compute[235132]:     <timer name="pit" tickpolicy="delay"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     <timer name="hpet" present="no"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:   </clock>
Oct 10 10:13:47 compute-1 nova_compute[235132]:   <cpu mode="host-model" match="exact">
Oct 10 10:13:47 compute-1 nova_compute[235132]:     <topology sockets="1" cores="1" threads="1"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:   </cpu>
Oct 10 10:13:47 compute-1 nova_compute[235132]:   <devices>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     <disk type="network" device="disk">
Oct 10 10:13:47 compute-1 nova_compute[235132]:       <driver type="raw" cache="none"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:       <source protocol="rbd" name="vms/12298a8d-d383-47da-91e4-0a918e153f1d_disk">
Oct 10 10:13:47 compute-1 nova_compute[235132]:         <host name="192.168.122.100" port="6789"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:         <host name="192.168.122.102" port="6789"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:         <host name="192.168.122.101" port="6789"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:       </source>
Oct 10 10:13:47 compute-1 nova_compute[235132]:       <auth username="openstack">
Oct 10 10:13:47 compute-1 nova_compute[235132]:         <secret type="ceph" uuid="21f084a3-af34-5230-afe4-ea5cd24a55f4"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:       </auth>
Oct 10 10:13:47 compute-1 nova_compute[235132]:       <target dev="vda" bus="virtio"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     </disk>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     <disk type="network" device="cdrom">
Oct 10 10:13:47 compute-1 nova_compute[235132]:       <driver type="raw" cache="none"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:       <source protocol="rbd" name="vms/12298a8d-d383-47da-91e4-0a918e153f1d_disk.config">
Oct 10 10:13:47 compute-1 nova_compute[235132]:         <host name="192.168.122.100" port="6789"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:         <host name="192.168.122.102" port="6789"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:         <host name="192.168.122.101" port="6789"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:       </source>
Oct 10 10:13:47 compute-1 nova_compute[235132]:       <auth username="openstack">
Oct 10 10:13:47 compute-1 nova_compute[235132]:         <secret type="ceph" uuid="21f084a3-af34-5230-afe4-ea5cd24a55f4"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:       </auth>
Oct 10 10:13:47 compute-1 nova_compute[235132]:       <target dev="sda" bus="sata"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     </disk>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     <interface type="ethernet">
Oct 10 10:13:47 compute-1 nova_compute[235132]:       <mac address="fa:16:3e:9d:f6:71"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:       <model type="virtio"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:       <driver name="vhost" rx_queue_size="512"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:       <mtu size="1442"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:       <target dev="tap446b0e59-d2"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     </interface>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     <serial type="pty">
Oct 10 10:13:47 compute-1 nova_compute[235132]:       <log file="/var/lib/nova/instances/12298a8d-d383-47da-91e4-0a918e153f1d/console.log" append="off"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     </serial>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     <video>
Oct 10 10:13:47 compute-1 nova_compute[235132]:       <model type="virtio"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     </video>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     <input type="tablet" bus="usb"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     <rng model="virtio">
Oct 10 10:13:47 compute-1 nova_compute[235132]:       <backend model="random">/dev/urandom</backend>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     </rng>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     <controller type="usb" index="0"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     <memballoon model="virtio">
Oct 10 10:13:47 compute-1 nova_compute[235132]:       <stats period="10"/>
Oct 10 10:13:47 compute-1 nova_compute[235132]:     </memballoon>
Oct 10 10:13:47 compute-1 nova_compute[235132]:   </devices>
Oct 10 10:13:47 compute-1 nova_compute[235132]: </domain>
Oct 10 10:13:47 compute-1 nova_compute[235132]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 10 10:13:47 compute-1 nova_compute[235132]: 2025-10-10 10:13:47.227 2 DEBUG nova.compute.manager [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Preparing to wait for external event network-vif-plugged-446b0e59-d2be-42d8-801f-7ba63ba76e66 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 10 10:13:47 compute-1 nova_compute[235132]: 2025-10-10 10:13:47.227 2 DEBUG oslo_concurrency.lockutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "12298a8d-d383-47da-91e4-0a918e153f1d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:13:47 compute-1 nova_compute[235132]: 2025-10-10 10:13:47.228 2 DEBUG oslo_concurrency.lockutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "12298a8d-d383-47da-91e4-0a918e153f1d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:13:47 compute-1 nova_compute[235132]: 2025-10-10 10:13:47.228 2 DEBUG oslo_concurrency.lockutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "12298a8d-d383-47da-91e4-0a918e153f1d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:13:47 compute-1 nova_compute[235132]: 2025-10-10 10:13:47.229 2 DEBUG nova.virt.libvirt.vif [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-10T10:13:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-742591551',display_name='tempest-TestNetworkBasicOps-server-742591551',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-742591551',id=4,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDCoryMMDZ6cZj1EAzGK4muKCZLgNsQyPcigwS48pCfmWHQQLrGNGrCkXZ7qqZSzWLyfX4m7fzgUMEko2IR4dU9srCI10SLqm/ZSwQK7hB66f+rf62WEii+W4TMQEFu9vA==',key_name='tempest-TestNetworkBasicOps-766718028',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-svhla3ss',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-10T10:13:42Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=12298a8d-d383-47da-91e4-0a918e153f1d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "address": "fa:16:3e:9d:f6:71", "network": {"id": "c8850c4c-dc38-4440-9c03-f2dd59684fe6", "bridge": "br-int", "label": "tempest-network-smoke--781461532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap446b0e59-d2", "ovs_interfaceid": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 10 10:13:47 compute-1 nova_compute[235132]: 2025-10-10 10:13:47.229 2 DEBUG nova.network.os_vif_util [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "address": "fa:16:3e:9d:f6:71", "network": {"id": "c8850c4c-dc38-4440-9c03-f2dd59684fe6", "bridge": "br-int", "label": "tempest-network-smoke--781461532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap446b0e59-d2", "ovs_interfaceid": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 10 10:13:47 compute-1 nova_compute[235132]: 2025-10-10 10:13:47.230 2 DEBUG nova.network.os_vif_util [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:f6:71,bridge_name='br-int',has_traffic_filtering=True,id=446b0e59-d2be-42d8-801f-7ba63ba76e66,network=Network(c8850c4c-dc38-4440-9c03-f2dd59684fe6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap446b0e59-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 10 10:13:47 compute-1 nova_compute[235132]: 2025-10-10 10:13:47.230 2 DEBUG os_vif [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:f6:71,bridge_name='br-int',has_traffic_filtering=True,id=446b0e59-d2be-42d8-801f-7ba63ba76e66,network=Network(c8850c4c-dc38-4440-9c03-f2dd59684fe6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap446b0e59-d2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 10 10:13:47 compute-1 nova_compute[235132]: 2025-10-10 10:13:47.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:47 compute-1 nova_compute[235132]: 2025-10-10 10:13:47.231 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:13:47 compute-1 nova_compute[235132]: 2025-10-10 10:13:47.232 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 10 10:13:47 compute-1 nova_compute[235132]: 2025-10-10 10:13:47.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:47 compute-1 nova_compute[235132]: 2025-10-10 10:13:47.236 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap446b0e59-d2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:13:47 compute-1 nova_compute[235132]: 2025-10-10 10:13:47.237 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap446b0e59-d2, col_values=(('external_ids', {'iface-id': '446b0e59-d2be-42d8-801f-7ba63ba76e66', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9d:f6:71', 'vm-uuid': '12298a8d-d383-47da-91e4-0a918e153f1d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:13:47 compute-1 nova_compute[235132]: 2025-10-10 10:13:47.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:47 compute-1 NetworkManager[44982]: <info>  [1760091227.2397] manager: (tap446b0e59-d2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Oct 10 10:13:47 compute-1 nova_compute[235132]: 2025-10-10 10:13:47.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 10 10:13:47 compute-1 nova_compute[235132]: 2025-10-10 10:13:47.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:47 compute-1 nova_compute[235132]: 2025-10-10 10:13:47.249 2 INFO os_vif [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:f6:71,bridge_name='br-int',has_traffic_filtering=True,id=446b0e59-d2be-42d8-801f-7ba63ba76e66,network=Network(c8850c4c-dc38-4440-9c03-f2dd59684fe6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap446b0e59-d2')
Oct 10 10:13:47 compute-1 nova_compute[235132]: 2025-10-10 10:13:47.324 2 DEBUG nova.virt.libvirt.driver [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 10 10:13:47 compute-1 nova_compute[235132]: 2025-10-10 10:13:47.325 2 DEBUG nova.virt.libvirt.driver [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 10 10:13:47 compute-1 nova_compute[235132]: 2025-10-10 10:13:47.326 2 DEBUG nova.virt.libvirt.driver [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No VIF found with MAC fa:16:3e:9d:f6:71, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 10 10:13:47 compute-1 nova_compute[235132]: 2025-10-10 10:13:47.326 2 INFO nova.virt.libvirt.driver [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Using config drive
Oct 10 10:13:47 compute-1 nova_compute[235132]: 2025-10-10 10:13:47.364 2 DEBUG nova.storage.rbd_utils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 12298a8d-d383-47da-91e4-0a918e153f1d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:13:47 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/1301345308' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:13:47 compute-1 ceph-mon[79167]: pgmap v835: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Oct 10 10:13:47 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/2845144114' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:13:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:13:47 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:47 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5340001b90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:47 compute-1 nova_compute[235132]: 2025-10-10 10:13:47.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:47 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:47 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:47 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:47.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:48 compute-1 nova_compute[235132]: 2025-10-10 10:13:48.001 2 INFO nova.virt.libvirt.driver [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Creating config drive at /var/lib/nova/instances/12298a8d-d383-47da-91e4-0a918e153f1d/disk.config
Oct 10 10:13:48 compute-1 nova_compute[235132]: 2025-10-10 10:13:48.007 2 DEBUG oslo_concurrency.processutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/12298a8d-d383-47da-91e4-0a918e153f1d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdonamwlu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:13:48 compute-1 nova_compute[235132]: 2025-10-10 10:13:48.136 2 DEBUG oslo_concurrency.processutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/12298a8d-d383-47da-91e4-0a918e153f1d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdonamwlu" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:13:48 compute-1 nova_compute[235132]: 2025-10-10 10:13:48.171 2 DEBUG nova.storage.rbd_utils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 12298a8d-d383-47da-91e4-0a918e153f1d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:13:48 compute-1 nova_compute[235132]: 2025-10-10 10:13:48.177 2 DEBUG oslo_concurrency.processutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/12298a8d-d383-47da-91e4-0a918e153f1d/disk.config 12298a8d-d383-47da-91e4-0a918e153f1d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:13:48 compute-1 nova_compute[235132]: 2025-10-10 10:13:48.199 2 DEBUG nova.network.neutron [req-6d53b4c6-f79e-4913-a3d7-04843d6cb074 req-41370972-3f2f-4e81-a8a5-365aae2fedf4 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Updated VIF entry in instance network info cache for port 446b0e59-d2be-42d8-801f-7ba63ba76e66. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 10 10:13:48 compute-1 nova_compute[235132]: 2025-10-10 10:13:48.200 2 DEBUG nova.network.neutron [req-6d53b4c6-f79e-4913-a3d7-04843d6cb074 req-41370972-3f2f-4e81-a8a5-365aae2fedf4 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Updating instance_info_cache with network_info: [{"id": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "address": "fa:16:3e:9d:f6:71", "network": {"id": "c8850c4c-dc38-4440-9c03-f2dd59684fe6", "bridge": "br-int", "label": "tempest-network-smoke--781461532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap446b0e59-d2", "ovs_interfaceid": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:13:48 compute-1 nova_compute[235132]: 2025-10-10 10:13:48.226 2 DEBUG oslo_concurrency.lockutils [req-6d53b4c6-f79e-4913-a3d7-04843d6cb074 req-41370972-3f2f-4e81-a8a5-365aae2fedf4 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Releasing lock "refresh_cache-12298a8d-d383-47da-91e4-0a918e153f1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:13:48 compute-1 nova_compute[235132]: 2025-10-10 10:13:48.347 2 DEBUG oslo_concurrency.processutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/12298a8d-d383-47da-91e4-0a918e153f1d/disk.config 12298a8d-d383-47da-91e4-0a918e153f1d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:13:48 compute-1 nova_compute[235132]: 2025-10-10 10:13:48.348 2 INFO nova.virt.libvirt.driver [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Deleting local config drive /var/lib/nova/instances/12298a8d-d383-47da-91e4-0a918e153f1d/disk.config because it was imported into RBD.
Oct 10 10:13:48 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:48 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:13:48 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:48.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:13:48 compute-1 kernel: tap446b0e59-d2: entered promiscuous mode
Oct 10 10:13:48 compute-1 NetworkManager[44982]: <info>  [1760091228.4291] manager: (tap446b0e59-d2): new Tun device (/org/freedesktop/NetworkManager/Devices/44)
Oct 10 10:13:48 compute-1 ovn_controller[131749]: 2025-10-10T10:13:48Z|00057|binding|INFO|Claiming lport 446b0e59-d2be-42d8-801f-7ba63ba76e66 for this chassis.
Oct 10 10:13:48 compute-1 ovn_controller[131749]: 2025-10-10T10:13:48Z|00058|binding|INFO|446b0e59-d2be-42d8-801f-7ba63ba76e66: Claiming fa:16:3e:9d:f6:71 10.100.0.14
Oct 10 10:13:48 compute-1 nova_compute[235132]: 2025-10-10 10:13:48.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:48 compute-1 nova_compute[235132]: 2025-10-10 10:13:48.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:48 compute-1 nova_compute[235132]: 2025-10-10 10:13:48.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:48 compute-1 systemd-machined[191637]: New machine qemu-3-instance-00000004.
Oct 10 10:13:48 compute-1 systemd-udevd[241532]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 10:13:48 compute-1 NetworkManager[44982]: <info>  [1760091228.4966] device (tap446b0e59-d2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 10:13:48 compute-1 NetworkManager[44982]: <info>  [1760091228.4976] device (tap446b0e59-d2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 10 10:13:48 compute-1 systemd[1]: Started Virtual Machine qemu-3-instance-00000004.
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.530 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:f6:71 10.100.0.14'], port_security=['fa:16:3e:9d:f6:71 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '12298a8d-d383-47da-91e4-0a918e153f1d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c8850c4c-dc38-4440-9c03-f2dd59684fe6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e222deba-0df5-4a21-bff7-930fc17b2ea1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=21e2152f-e965-46e3-9774-988f8fdf189b, chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>], logical_port=446b0e59-d2be-42d8-801f-7ba63ba76e66) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.531 141156 INFO neutron.agent.ovn.metadata.agent [-] Port 446b0e59-d2be-42d8-801f-7ba63ba76e66 in datapath c8850c4c-dc38-4440-9c03-f2dd59684fe6 bound to our chassis
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.532 141156 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c8850c4c-dc38-4440-9c03-f2dd59684fe6
Oct 10 10:13:48 compute-1 ovn_controller[131749]: 2025-10-10T10:13:48Z|00059|binding|INFO|Setting lport 446b0e59-d2be-42d8-801f-7ba63ba76e66 ovn-installed in OVS
Oct 10 10:13:48 compute-1 ovn_controller[131749]: 2025-10-10T10:13:48Z|00060|binding|INFO|Setting lport 446b0e59-d2be-42d8-801f-7ba63ba76e66 up in Southbound
Oct 10 10:13:48 compute-1 nova_compute[235132]: 2025-10-10 10:13:48.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.552 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[cb86f869-7086-4dbe-9e6e-c2b89d22727f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.553 141156 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc8850c4c-d1 in ovnmeta-c8850c4c-dc38-4440-9c03-f2dd59684fe6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.555 238898 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc8850c4c-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.556 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[06cafb6a-8361-4ca7-9cb8-7a868cf23506]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.556 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[d1b16485-6585-4e95-be88-8d67e25701f4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.571 141275 DEBUG oslo.privsep.daemon [-] privsep: reply[69fd44c9-13fd-4024-a84d-52ada81088bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.596 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[5c7f660a-b237-4c31-91ff-43a1dce7338d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.629 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[c91784a9-8261-475d-a48b-d47f8533ecdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.636 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[17c3dc1e-1665-4d80-bedd-c9b231bd79ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:13:48 compute-1 NetworkManager[44982]: <info>  [1760091228.6371] manager: (tapc8850c4c-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/45)
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.686 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[e3ae38ea-44b2-4621-b96e-977a58cf3285]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.690 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[f222751c-cfc7-4ac6-a33b-bd9ce087b6cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:13:48 compute-1 NetworkManager[44982]: <info>  [1760091228.7145] device (tapc8850c4c-d0): carrier: link connected
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.721 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[575988a0-5646-4639-9a0c-d7762df3729e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.742 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[cf800ab3-ec1e-4b54-be5a-a6c260535f90]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc8850c4c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2c:14:44'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 414242, 'reachable_time': 20031, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 241565, 'error': None, 'target': 'ovnmeta-c8850c4c-dc38-4440-9c03-f2dd59684fe6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.762 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[7bc25817-26a9-4df1-b1c3-763e3484033a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2c:1444'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 414242, 'tstamp': 414242}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 241567, 'error': None, 'target': 'ovnmeta-c8850c4c-dc38-4440-9c03-f2dd59684fe6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.783 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[6affae5b-9f8b-403e-a091-059d932cacab]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc8850c4c-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2c:14:44'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 414242, 'reachable_time': 20031, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 241568, 'error': None, 'target': 'ovnmeta-c8850c4c-dc38-4440-9c03-f2dd59684fe6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.812 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[160e8f30-e392-418f-948e-aa3a6d285bea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.878 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[2aa860dd-d5d6-4fcc-857c-8900a4a9d64b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.880 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc8850c4c-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.880 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.881 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc8850c4c-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:13:48 compute-1 NetworkManager[44982]: <info>  [1760091228.8839] manager: (tapc8850c4c-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Oct 10 10:13:48 compute-1 kernel: tapc8850c4c-d0: entered promiscuous mode
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.887 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc8850c4c-d0, col_values=(('external_ids', {'iface-id': '185907ee-d118-486d-93ad-c5a1b6a3a149'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:13:48 compute-1 nova_compute[235132]: 2025-10-10 10:13:48.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:48 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:48 compute-1 ovn_controller[131749]: 2025-10-10T10:13:48Z|00061|binding|INFO|Releasing lport 185907ee-d118-486d-93ad-c5a1b6a3a149 from this chassis (sb_readonly=0)
Oct 10 10:13:48 compute-1 nova_compute[235132]: 2025-10-10 10:13:48.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:48 compute-1 nova_compute[235132]: 2025-10-10 10:13:48.895 2 DEBUG nova.compute.manager [req-e5e0429b-4860-455b-94ce-8a0645a1eba5 req-9b7f72ab-c27e-47a8-840d-6710dcf9a769 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Received event network-vif-plugged-446b0e59-d2be-42d8-801f-7ba63ba76e66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:13:48 compute-1 nova_compute[235132]: 2025-10-10 10:13:48.896 2 DEBUG oslo_concurrency.lockutils [req-e5e0429b-4860-455b-94ce-8a0645a1eba5 req-9b7f72ab-c27e-47a8-840d-6710dcf9a769 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "12298a8d-d383-47da-91e4-0a918e153f1d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:13:48 compute-1 nova_compute[235132]: 2025-10-10 10:13:48.896 2 DEBUG oslo_concurrency.lockutils [req-e5e0429b-4860-455b-94ce-8a0645a1eba5 req-9b7f72ab-c27e-47a8-840d-6710dcf9a769 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "12298a8d-d383-47da-91e4-0a918e153f1d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:13:48 compute-1 nova_compute[235132]: 2025-10-10 10:13:48.897 2 DEBUG oslo_concurrency.lockutils [req-e5e0429b-4860-455b-94ce-8a0645a1eba5 req-9b7f72ab-c27e-47a8-840d-6710dcf9a769 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "12298a8d-d383-47da-91e4-0a918e153f1d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:13:48 compute-1 nova_compute[235132]: 2025-10-10 10:13:48.897 2 DEBUG nova.compute.manager [req-e5e0429b-4860-455b-94ce-8a0645a1eba5 req-9b7f72ab-c27e-47a8-840d-6710dcf9a769 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Processing event network-vif-plugged-446b0e59-d2be-42d8-801f-7ba63ba76e66 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 10 10:13:48 compute-1 nova_compute[235132]: 2025-10-10 10:13:48.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.909 141156 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c8850c4c-dc38-4440-9c03-f2dd59684fe6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c8850c4c-dc38-4440-9c03-f2dd59684fe6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.909 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[a8767d74-714a-40cf-9711-59cfe5109b10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.911 141156 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]: global
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]:     log         /dev/log local0 debug
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]:     log-tag     haproxy-metadata-proxy-c8850c4c-dc38-4440-9c03-f2dd59684fe6
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]:     user        root
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]:     group       root
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]:     maxconn     1024
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]:     pidfile     /var/lib/neutron/external/pids/c8850c4c-dc38-4440-9c03-f2dd59684fe6.pid.haproxy
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]:     daemon
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]: 
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]: defaults
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]:     log global
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]:     mode http
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]:     option httplog
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]:     option dontlognull
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]:     option http-server-close
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]:     option forwardfor
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]:     retries                 3
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]:     timeout http-request    30s
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]:     timeout connect         30s
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]:     timeout client          32s
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]:     timeout server          32s
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]:     timeout http-keep-alive 30s
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]: 
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]: 
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]: listen listener
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]:     bind 169.254.169.254:80
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]:     server metadata /var/lib/neutron/metadata_proxy
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]:     http-request add-header X-OVN-Network-ID c8850c4c-dc38-4440-9c03-f2dd59684fe6
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 10 10:13:48 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:13:48.912 141156 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c8850c4c-dc38-4440-9c03-f2dd59684fe6', 'env', 'PROCESS_TAG=haproxy-c8850c4c-dc38-4440-9c03-f2dd59684fe6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c8850c4c-dc38-4440-9c03-f2dd59684fe6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 10 10:13:49 compute-1 nova_compute[235132]: 2025-10-10 10:13:49.039 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:13:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:49 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c0046c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:49 compute-1 podman[241600]: 2025-10-10 10:13:49.332305154 +0000 UTC m=+0.079300028 container create 45c916acf584203b7020f57ecd227df56e00ba5f26ca42ca74b61d77d56523b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c8850c4c-dc38-4440-9c03-f2dd59684fe6, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:13:49 compute-1 podman[241600]: 2025-10-10 10:13:49.294024047 +0000 UTC m=+0.041018941 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 10 10:13:49 compute-1 systemd[1]: Started libpod-conmon-45c916acf584203b7020f57ecd227df56e00ba5f26ca42ca74b61d77d56523b2.scope.
Oct 10 10:13:49 compute-1 systemd[1]: Started libcrun container.
Oct 10 10:13:49 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c55767b3c231b50f03b73e902ec5a6e120dd175734d051879abbfb9aabc4097/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 10 10:13:49 compute-1 podman[241600]: 2025-10-10 10:13:49.445983223 +0000 UTC m=+0.192978127 container init 45c916acf584203b7020f57ecd227df56e00ba5f26ca42ca74b61d77d56523b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c8850c4c-dc38-4440-9c03-f2dd59684fe6, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 10 10:13:49 compute-1 podman[241600]: 2025-10-10 10:13:49.452528131 +0000 UTC m=+0.199523005 container start 45c916acf584203b7020f57ecd227df56e00ba5f26ca42ca74b61d77d56523b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c8850c4c-dc38-4440-9c03-f2dd59684fe6, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct 10 10:13:49 compute-1 neutron-haproxy-ovnmeta-c8850c4c-dc38-4440-9c03-f2dd59684fe6[241616]: [NOTICE]   (241620) : New worker (241629) forked
Oct 10 10:13:49 compute-1 neutron-haproxy-ovnmeta-c8850c4c-dc38-4440-9c03-f2dd59684fe6[241616]: [NOTICE]   (241620) : Loading success.
Oct 10 10:13:49 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:49 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328004630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:50 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:50 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:50 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:49.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:50 compute-1 ceph-mon[79167]: pgmap v836: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 10 10:13:50 compute-1 nova_compute[235132]: 2025-10-10 10:13:50.125 2 DEBUG nova.compute.manager [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 10 10:13:50 compute-1 nova_compute[235132]: 2025-10-10 10:13:50.125 2 DEBUG nova.virt.driver [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Emitting event <LifecycleEvent: 1760091230.1245322, 12298a8d-d383-47da-91e4-0a918e153f1d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 10:13:50 compute-1 nova_compute[235132]: 2025-10-10 10:13:50.126 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] VM Started (Lifecycle Event)
Oct 10 10:13:50 compute-1 nova_compute[235132]: 2025-10-10 10:13:50.134 2 DEBUG nova.virt.libvirt.driver [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 10 10:13:50 compute-1 nova_compute[235132]: 2025-10-10 10:13:50.138 2 INFO nova.virt.libvirt.driver [-] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Instance spawned successfully.
Oct 10 10:13:50 compute-1 nova_compute[235132]: 2025-10-10 10:13:50.139 2 DEBUG nova.virt.libvirt.driver [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 10 10:13:50 compute-1 nova_compute[235132]: 2025-10-10 10:13:50.162 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:13:50 compute-1 nova_compute[235132]: 2025-10-10 10:13:50.169 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 10 10:13:50 compute-1 nova_compute[235132]: 2025-10-10 10:13:50.181 2 DEBUG nova.virt.libvirt.driver [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:13:50 compute-1 nova_compute[235132]: 2025-10-10 10:13:50.182 2 DEBUG nova.virt.libvirt.driver [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:13:50 compute-1 nova_compute[235132]: 2025-10-10 10:13:50.182 2 DEBUG nova.virt.libvirt.driver [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:13:50 compute-1 nova_compute[235132]: 2025-10-10 10:13:50.183 2 DEBUG nova.virt.libvirt.driver [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:13:50 compute-1 nova_compute[235132]: 2025-10-10 10:13:50.184 2 DEBUG nova.virt.libvirt.driver [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:13:50 compute-1 nova_compute[235132]: 2025-10-10 10:13:50.185 2 DEBUG nova.virt.libvirt.driver [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:13:50 compute-1 nova_compute[235132]: 2025-10-10 10:13:50.222 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 10 10:13:50 compute-1 nova_compute[235132]: 2025-10-10 10:13:50.223 2 DEBUG nova.virt.driver [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Emitting event <LifecycleEvent: 1760091230.1280189, 12298a8d-d383-47da-91e4-0a918e153f1d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 10:13:50 compute-1 nova_compute[235132]: 2025-10-10 10:13:50.224 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] VM Paused (Lifecycle Event)
Oct 10 10:13:50 compute-1 nova_compute[235132]: 2025-10-10 10:13:50.264 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:13:50 compute-1 nova_compute[235132]: 2025-10-10 10:13:50.270 2 DEBUG nova.virt.driver [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Emitting event <LifecycleEvent: 1760091230.133719, 12298a8d-d383-47da-91e4-0a918e153f1d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 10:13:50 compute-1 nova_compute[235132]: 2025-10-10 10:13:50.270 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] VM Resumed (Lifecycle Event)
Oct 10 10:13:50 compute-1 nova_compute[235132]: 2025-10-10 10:13:50.284 2 INFO nova.compute.manager [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Took 8.09 seconds to spawn the instance on the hypervisor.
Oct 10 10:13:50 compute-1 nova_compute[235132]: 2025-10-10 10:13:50.285 2 DEBUG nova.compute.manager [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:13:50 compute-1 nova_compute[235132]: 2025-10-10 10:13:50.299 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:13:50 compute-1 nova_compute[235132]: 2025-10-10 10:13:50.304 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 10 10:13:50 compute-1 nova_compute[235132]: 2025-10-10 10:13:50.327 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 10 10:13:50 compute-1 nova_compute[235132]: 2025-10-10 10:13:50.365 2 INFO nova.compute.manager [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Took 9.17 seconds to build instance.
Oct 10 10:13:50 compute-1 nova_compute[235132]: 2025-10-10 10:13:50.386 2 DEBUG oslo_concurrency.lockutils [None req-1f9006f2-b15f-45c4-8c2f-20dc3d7c452f 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "12298a8d-d383-47da-91e4-0a918e153f1d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.268s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:13:50 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:50 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:13:50 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:50.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:13:50 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:50 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5340001b90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:50 compute-1 nova_compute[235132]: 2025-10-10 10:13:50.977 2 DEBUG nova.compute.manager [req-b1248f37-11bc-4179-87af-e0e083c285bd req-a929fe41-e305-4777-82ea-2f2e9b5f3dbc 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Received event network-vif-plugged-446b0e59-d2be-42d8-801f-7ba63ba76e66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:13:50 compute-1 nova_compute[235132]: 2025-10-10 10:13:50.978 2 DEBUG oslo_concurrency.lockutils [req-b1248f37-11bc-4179-87af-e0e083c285bd req-a929fe41-e305-4777-82ea-2f2e9b5f3dbc 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "12298a8d-d383-47da-91e4-0a918e153f1d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:13:50 compute-1 nova_compute[235132]: 2025-10-10 10:13:50.978 2 DEBUG oslo_concurrency.lockutils [req-b1248f37-11bc-4179-87af-e0e083c285bd req-a929fe41-e305-4777-82ea-2f2e9b5f3dbc 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "12298a8d-d383-47da-91e4-0a918e153f1d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:13:50 compute-1 nova_compute[235132]: 2025-10-10 10:13:50.979 2 DEBUG oslo_concurrency.lockutils [req-b1248f37-11bc-4179-87af-e0e083c285bd req-a929fe41-e305-4777-82ea-2f2e9b5f3dbc 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "12298a8d-d383-47da-91e4-0a918e153f1d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:13:50 compute-1 nova_compute[235132]: 2025-10-10 10:13:50.979 2 DEBUG nova.compute.manager [req-b1248f37-11bc-4179-87af-e0e083c285bd req-a929fe41-e305-4777-82ea-2f2e9b5f3dbc 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] No waiting events found dispatching network-vif-plugged-446b0e59-d2be-42d8-801f-7ba63ba76e66 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 10:13:50 compute-1 nova_compute[235132]: 2025-10-10 10:13:50.980 2 WARNING nova.compute.manager [req-b1248f37-11bc-4179-87af-e0e083c285bd req-a929fe41-e305-4777-82ea-2f2e9b5f3dbc 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Received unexpected event network-vif-plugged-446b0e59-d2be-42d8-801f-7ba63ba76e66 for instance with vm_state active and task_state None.
Oct 10 10:13:51 compute-1 nova_compute[235132]: 2025-10-10 10:13:51.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:13:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:51 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:51 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:51 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c0046c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:52 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:52 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:52 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:52.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:52 compute-1 ceph-mon[79167]: pgmap v837: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct 10 10:13:52 compute-1 nova_compute[235132]: 2025-10-10 10:13:52.039 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:13:52 compute-1 nova_compute[235132]: 2025-10-10 10:13:52.066 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:13:52 compute-1 nova_compute[235132]: 2025-10-10 10:13:52.067 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 10:13:52 compute-1 nova_compute[235132]: 2025-10-10 10:13:52.068 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 10:13:52 compute-1 nova_compute[235132]: 2025-10-10 10:13:52.223 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "refresh_cache-12298a8d-d383-47da-91e4-0a918e153f1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:13:52 compute-1 nova_compute[235132]: 2025-10-10 10:13:52.223 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquired lock "refresh_cache-12298a8d-d383-47da-91e4-0a918e153f1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:13:52 compute-1 nova_compute[235132]: 2025-10-10 10:13:52.224 2 DEBUG nova.network.neutron [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 10 10:13:52 compute-1 nova_compute[235132]: 2025-10-10 10:13:52.224 2 DEBUG nova.objects.instance [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lazy-loading 'info_cache' on Instance uuid 12298a8d-d383-47da-91e4-0a918e153f1d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 10:13:52 compute-1 nova_compute[235132]: 2025-10-10 10:13:52.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:52 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:52 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:13:52 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:52.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:13:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:13:52 compute-1 ovn_controller[131749]: 2025-10-10T10:13:52Z|00062|binding|INFO|Releasing lport 185907ee-d118-486d-93ad-c5a1b6a3a149 from this chassis (sb_readonly=0)
Oct 10 10:13:52 compute-1 NetworkManager[44982]: <info>  [1760091232.5414] manager: (patch-provnet-1d90fa58-74cb-4ad4-84e0-739689a69111-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Oct 10 10:13:52 compute-1 NetworkManager[44982]: <info>  [1760091232.5434] manager: (patch-br-int-to-provnet-1d90fa58-74cb-4ad4-84e0-739689a69111): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Oct 10 10:13:52 compute-1 nova_compute[235132]: 2025-10-10 10:13:52.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:52 compute-1 ovn_controller[131749]: 2025-10-10T10:13:52Z|00063|binding|INFO|Releasing lport 185907ee-d118-486d-93ad-c5a1b6a3a149 from this chassis (sb_readonly=0)
Oct 10 10:13:52 compute-1 nova_compute[235132]: 2025-10-10 10:13:52.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:52 compute-1 nova_compute[235132]: 2025-10-10 10:13:52.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:52 compute-1 nova_compute[235132]: 2025-10-10 10:13:52.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:52 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:52 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328004630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:53 compute-1 nova_compute[235132]: 2025-10-10 10:13:53.056 2 DEBUG nova.compute.manager [req-3dbbf8bf-b7f4-4d5e-a52a-4ca781505f54 req-7801e5b8-4805-400d-b9eb-0f954034b52d 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Received event network-changed-446b0e59-d2be-42d8-801f-7ba63ba76e66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:13:53 compute-1 nova_compute[235132]: 2025-10-10 10:13:53.057 2 DEBUG nova.compute.manager [req-3dbbf8bf-b7f4-4d5e-a52a-4ca781505f54 req-7801e5b8-4805-400d-b9eb-0f954034b52d 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Refreshing instance network info cache due to event network-changed-446b0e59-d2be-42d8-801f-7ba63ba76e66. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 10 10:13:53 compute-1 nova_compute[235132]: 2025-10-10 10:13:53.057 2 DEBUG oslo_concurrency.lockutils [req-3dbbf8bf-b7f4-4d5e-a52a-4ca781505f54 req-7801e5b8-4805-400d-b9eb-0f954034b52d 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "refresh_cache-12298a8d-d383-47da-91e4-0a918e153f1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:13:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:53 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5340001b90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:53 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:54 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:54 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:54 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:54.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:54 compute-1 ceph-mon[79167]: pgmap v838: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 10 10:13:54 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2920225613' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:13:54 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1533079177' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:13:54 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/3365250461' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:13:54 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:54 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:13:54 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:54.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:13:54 compute-1 nova_compute[235132]: 2025-10-10 10:13:54.625 2 DEBUG nova.network.neutron [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Updating instance_info_cache with network_info: [{"id": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "address": "fa:16:3e:9d:f6:71", "network": {"id": "c8850c4c-dc38-4440-9c03-f2dd59684fe6", "bridge": "br-int", "label": "tempest-network-smoke--781461532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap446b0e59-d2", "ovs_interfaceid": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:13:54 compute-1 nova_compute[235132]: 2025-10-10 10:13:54.647 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Releasing lock "refresh_cache-12298a8d-d383-47da-91e4-0a918e153f1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:13:54 compute-1 nova_compute[235132]: 2025-10-10 10:13:54.647 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 10 10:13:54 compute-1 nova_compute[235132]: 2025-10-10 10:13:54.647 2 DEBUG oslo_concurrency.lockutils [req-3dbbf8bf-b7f4-4d5e-a52a-4ca781505f54 req-7801e5b8-4805-400d-b9eb-0f954034b52d 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquired lock "refresh_cache-12298a8d-d383-47da-91e4-0a918e153f1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:13:54 compute-1 nova_compute[235132]: 2025-10-10 10:13:54.647 2 DEBUG nova.network.neutron [req-3dbbf8bf-b7f4-4d5e-a52a-4ca781505f54 req-7801e5b8-4805-400d-b9eb-0f954034b52d 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Refreshing network info cache for port 446b0e59-d2be-42d8-801f-7ba63ba76e66 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 10 10:13:54 compute-1 nova_compute[235132]: 2025-10-10 10:13:54.648 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:13:54 compute-1 nova_compute[235132]: 2025-10-10 10:13:54.649 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:13:54 compute-1 nova_compute[235132]: 2025-10-10 10:13:54.649 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:13:54 compute-1 nova_compute[235132]: 2025-10-10 10:13:54.649 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:13:54 compute-1 nova_compute[235132]: 2025-10-10 10:13:54.649 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 10:13:54 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:54 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c0046c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:54 compute-1 podman[241677]: 2025-10-10 10:13:54.977421934 +0000 UTC m=+0.071485794 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 10:13:54 compute-1 podman[241676]: 2025-10-10 10:13:54.992243869 +0000 UTC m=+0.082811824 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:13:55 compute-1 nova_compute[235132]: 2025-10-10 10:13:55.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:13:55 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1769663104' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:13:55 compute-1 nova_compute[235132]: 2025-10-10 10:13:55.071 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:13:55 compute-1 nova_compute[235132]: 2025-10-10 10:13:55.072 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:13:55 compute-1 nova_compute[235132]: 2025-10-10 10:13:55.072 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:13:55 compute-1 nova_compute[235132]: 2025-10-10 10:13:55.073 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:13:55 compute-1 nova_compute[235132]: 2025-10-10 10:13:55.073 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:13:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:55 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328004630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:55 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:13:55 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1438646568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:13:55 compute-1 nova_compute[235132]: 2025-10-10 10:13:55.513 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:13:55 compute-1 nova_compute[235132]: 2025-10-10 10:13:55.590 2 DEBUG nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 10 10:13:55 compute-1 nova_compute[235132]: 2025-10-10 10:13:55.591 2 DEBUG nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 10 10:13:55 compute-1 podman[241743]: 2025-10-10 10:13:55.672027765 +0000 UTC m=+0.108574239 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, io.buildah.version=1.41.3)
Oct 10 10:13:55 compute-1 nova_compute[235132]: 2025-10-10 10:13:55.755 2 WARNING nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:13:55 compute-1 nova_compute[235132]: 2025-10-10 10:13:55.756 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4763MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:13:55 compute-1 nova_compute[235132]: 2025-10-10 10:13:55.756 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:13:55 compute-1 nova_compute[235132]: 2025-10-10 10:13:55.757 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:13:55 compute-1 nova_compute[235132]: 2025-10-10 10:13:55.828 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Instance 12298a8d-d383-47da-91e4-0a918e153f1d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 10 10:13:55 compute-1 nova_compute[235132]: 2025-10-10 10:13:55.829 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:13:55 compute-1 nova_compute[235132]: 2025-10-10 10:13:55.829 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:13:55 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:55 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f534000b2d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:55 compute-1 nova_compute[235132]: 2025-10-10 10:13:55.878 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:13:56 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:56 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:13:56 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:56.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:13:56 compute-1 ceph-mon[79167]: pgmap v839: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 10 10:13:56 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/1438646568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:13:56 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:13:56 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2710916285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:13:56 compute-1 nova_compute[235132]: 2025-10-10 10:13:56.368 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:13:56 compute-1 nova_compute[235132]: 2025-10-10 10:13:56.376 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:13:56 compute-1 nova_compute[235132]: 2025-10-10 10:13:56.400 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:13:56 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:56 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:56 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:56.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:56 compute-1 nova_compute[235132]: 2025-10-10 10:13:56.420 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:13:56 compute-1 nova_compute[235132]: 2025-10-10 10:13:56.421 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:13:56 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:56 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:57 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/2710916285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:13:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:57 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c0046e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:57 compute-1 nova_compute[235132]: 2025-10-10 10:13:57.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:57 compute-1 nova_compute[235132]: 2025-10-10 10:13:57.421 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:13:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:13:57 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:57 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328004630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:57 compute-1 nova_compute[235132]: 2025-10-10 10:13:57.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:13:57 compute-1 nova_compute[235132]: 2025-10-10 10:13:57.925 2 DEBUG nova.network.neutron [req-3dbbf8bf-b7f4-4d5e-a52a-4ca781505f54 req-7801e5b8-4805-400d-b9eb-0f954034b52d 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Updated VIF entry in instance network info cache for port 446b0e59-d2be-42d8-801f-7ba63ba76e66. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 10 10:13:57 compute-1 nova_compute[235132]: 2025-10-10 10:13:57.926 2 DEBUG nova.network.neutron [req-3dbbf8bf-b7f4-4d5e-a52a-4ca781505f54 req-7801e5b8-4805-400d-b9eb-0f954034b52d 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Updating instance_info_cache with network_info: [{"id": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "address": "fa:16:3e:9d:f6:71", "network": {"id": "c8850c4c-dc38-4440-9c03-f2dd59684fe6", "bridge": "br-int", "label": "tempest-network-smoke--781461532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.181", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap446b0e59-d2", "ovs_interfaceid": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:13:57 compute-1 nova_compute[235132]: 2025-10-10 10:13:57.944 2 DEBUG oslo_concurrency.lockutils [req-3dbbf8bf-b7f4-4d5e-a52a-4ca781505f54 req-7801e5b8-4805-400d-b9eb-0f954034b52d 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Releasing lock "refresh_cache-12298a8d-d383-47da-91e4-0a918e153f1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:13:58 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:58 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:58 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:13:58.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:58 compute-1 ceph-mon[79167]: pgmap v840: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 10 10:13:58 compute-1 sudo[241792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:13:58 compute-1 sudo[241792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:13:58 compute-1 sudo[241792]: pam_unix(sudo:session): session closed for user root
Oct 10 10:13:58 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:13:58 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:13:58 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:13:58.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:13:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:58 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f534000b2d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:59 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:59 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:13:59 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:13:59 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004700 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:14:00 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:00 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:00 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:00.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:00 compute-1 ceph-mon[79167]: pgmap v841: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct 10 10:14:00 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:00 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:14:00 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:00.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:14:00 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:14:00 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328004630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:14:01 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:14:01 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f534000b2d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:14:01 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:14:01 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:14:02 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:02 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:14:02 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:02.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:14:02 compute-1 ceph-mon[79167]: pgmap v842: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct 10 10:14:02 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:14:02 compute-1 nova_compute[235132]: 2025-10-10 10:14:02.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:02 compute-1 ovn_controller[131749]: 2025-10-10T10:14:02Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9d:f6:71 10.100.0.14
Oct 10 10:14:02 compute-1 ovn_controller[131749]: 2025-10-10T10:14:02Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9d:f6:71 10.100.0.14
Oct 10 10:14:02 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:02 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:14:02 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:02.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:14:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:14:02 compute-1 nova_compute[235132]: 2025-10-10 10:14:02.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:02 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:14:02 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004720 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:14:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:14:03 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5314004600 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:14:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:14:03 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_24] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328004630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:14:04 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:04 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:04 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:04.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:04 compute-1 ceph-mon[79167]: pgmap v843: 353 pgs: 353 active+clean; 116 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 126 op/s
Oct 10 10:14:04 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:04 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:04 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:04.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:04 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:14:04 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f534000b2d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:14:05 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:14:05 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004740 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:14:05 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:14:05 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5320001e70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:14:06 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:06 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:06 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:06.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:06 compute-1 ceph-mon[79167]: pgmap v844: 353 pgs: 353 active+clean; 116 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 159 KiB/s rd, 2.1 MiB/s wr, 53 op/s
Oct 10 10:14:06 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:06 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:06 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:06.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:06 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:14:06 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_27] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5328004630 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:14:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:14:07 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f534000b2d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 10 10:14:07 compute-1 nova_compute[235132]: 2025-10-10 10:14:07.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:14:07 compute-1 nova_compute[235132]: 2025-10-10 10:14:07.751 2 INFO nova.compute.manager [None req-80302de6-4e29-4b7e-86e2-2dcb26b7c51a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Get console output
Oct 10 10:14:07 compute-1 nova_compute[235132]: 2025-10-10 10:14:07.756 631 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 10 10:14:07 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[238138]: 10/10/2025 10:14:07 : epoch 68e8db7a : compute-1 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f532c004760 fd 39 proxy ignored for local
Oct 10 10:14:07 compute-1 kernel: ganesha.nfsd[239748]: segfault at 50 ip 00007f53f66a832e sp 00007f53b7ffe210 error 4 in libntirpc.so.5.8[7f53f668d000+2c000] likely on CPU 1 (core 0, socket 1)
Oct 10 10:14:07 compute-1 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 10 10:14:07 compute-1 nova_compute[235132]: 2025-10-10 10:14:07.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:07 compute-1 systemd[1]: Started Process Core Dump (PID 241823/UID 0).
Oct 10 10:14:08 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:08 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:14:08 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:08.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:14:08 compute-1 ceph-mon[79167]: pgmap v845: 353 pgs: 353 active+clean; 116 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 159 KiB/s rd, 2.1 MiB/s wr, 53 op/s
Oct 10 10:14:08 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:08 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:14:08 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:08.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:14:08 compute-1 systemd-coredump[241824]: Process 238142 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 64:
                                                    #0  0x00007f53f66a832e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Oct 10 10:14:09 compute-1 systemd[1]: systemd-coredump@12-241823-0.service: Deactivated successfully.
Oct 10 10:14:09 compute-1 systemd[1]: systemd-coredump@12-241823-0.service: Consumed 1.136s CPU time.
Oct 10 10:14:09 compute-1 podman[241829]: 2025-10-10 10:14:09.159994465 +0000 UTC m=+0.029738994 container died 6546b2fcd1fe6d157439251f6fbf77cef47e24b9f982b7fd6618f23cf4621080 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct 10 10:14:09 compute-1 systemd[1]: var-lib-containers-storage-overlay-f6db3e4192f921f61bedae65edfc04d05878ec5c3891f666841a8bdf974350fc-merged.mount: Deactivated successfully.
Oct 10 10:14:09 compute-1 podman[241829]: 2025-10-10 10:14:09.205670254 +0000 UTC m=+0.075414763 container remove 6546b2fcd1fe6d157439251f6fbf77cef47e24b9f982b7fd6618f23cf4621080 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 10 10:14:09 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Main process exited, code=exited, status=139/n/a
Oct 10 10:14:09 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Failed with result 'exit-code'.
Oct 10 10:14:09 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 2.053s CPU time.
Oct 10 10:14:10 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:10 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:10 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:10.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:10 compute-1 ceph-mon[79167]: pgmap v846: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 187 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Oct 10 10:14:10 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:10 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:10 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:10.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:11 compute-1 nova_compute[235132]: 2025-10-10 10:14:11.000 2 DEBUG nova.compute.manager [req-69057401-35f3-41f2-a03a-294a1c6a7d44 req-01f47fe5-3be8-4db6-a610-dc1f6e8a06b0 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Received event network-changed-446b0e59-d2be-42d8-801f-7ba63ba76e66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:14:11 compute-1 nova_compute[235132]: 2025-10-10 10:14:11.000 2 DEBUG nova.compute.manager [req-69057401-35f3-41f2-a03a-294a1c6a7d44 req-01f47fe5-3be8-4db6-a610-dc1f6e8a06b0 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Refreshing instance network info cache due to event network-changed-446b0e59-d2be-42d8-801f-7ba63ba76e66. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 10 10:14:11 compute-1 nova_compute[235132]: 2025-10-10 10:14:11.000 2 DEBUG oslo_concurrency.lockutils [req-69057401-35f3-41f2-a03a-294a1c6a7d44 req-01f47fe5-3be8-4db6-a610-dc1f6e8a06b0 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "refresh_cache-12298a8d-d383-47da-91e4-0a918e153f1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:14:11 compute-1 nova_compute[235132]: 2025-10-10 10:14:11.000 2 DEBUG oslo_concurrency.lockutils [req-69057401-35f3-41f2-a03a-294a1c6a7d44 req-01f47fe5-3be8-4db6-a610-dc1f6e8a06b0 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquired lock "refresh_cache-12298a8d-d383-47da-91e4-0a918e153f1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:14:11 compute-1 nova_compute[235132]: 2025-10-10 10:14:11.001 2 DEBUG nova.network.neutron [req-69057401-35f3-41f2-a03a-294a1c6a7d44 req-01f47fe5-3be8-4db6-a610-dc1f6e8a06b0 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Refreshing network info cache for port 446b0e59-d2be-42d8-801f-7ba63ba76e66 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 10 10:14:11 compute-1 podman[241875]: 2025-10-10 10:14:11.971349449 +0000 UTC m=+0.068711829 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:14:12 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:12 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:14:12 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:12.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:14:12 compute-1 ceph-mon[79167]: pgmap v847: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 187 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Oct 10 10:14:12 compute-1 nova_compute[235132]: 2025-10-10 10:14:12.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:14:12 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:12 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:12 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:12.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:12 compute-1 nova_compute[235132]: 2025-10-10 10:14:12.795 2 DEBUG nova.network.neutron [req-69057401-35f3-41f2-a03a-294a1c6a7d44 req-01f47fe5-3be8-4db6-a610-dc1f6e8a06b0 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Updated VIF entry in instance network info cache for port 446b0e59-d2be-42d8-801f-7ba63ba76e66. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 10 10:14:12 compute-1 nova_compute[235132]: 2025-10-10 10:14:12.796 2 DEBUG nova.network.neutron [req-69057401-35f3-41f2-a03a-294a1c6a7d44 req-01f47fe5-3be8-4db6-a610-dc1f6e8a06b0 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Updating instance_info_cache with network_info: [{"id": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "address": "fa:16:3e:9d:f6:71", "network": {"id": "c8850c4c-dc38-4440-9c03-f2dd59684fe6", "bridge": "br-int", "label": "tempest-network-smoke--781461532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap446b0e59-d2", "ovs_interfaceid": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:14:12 compute-1 nova_compute[235132]: 2025-10-10 10:14:12.818 2 DEBUG oslo_concurrency.lockutils [req-69057401-35f3-41f2-a03a-294a1c6a7d44 req-01f47fe5-3be8-4db6-a610-dc1f6e8a06b0 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Releasing lock "refresh_cache-12298a8d-d383-47da-91e4-0a918e153f1d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:14:12 compute-1 nova_compute[235132]: 2025-10-10 10:14:12.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/101413 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 10:14:14 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:14 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:14 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:14.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:14 compute-1 ceph-mon[79167]: pgmap v848: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 192 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Oct 10 10:14:14 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:14 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:14 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:14.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:16 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:16 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:16 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:16.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:16 compute-1 ceph-mon[79167]: pgmap v849: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 35 KiB/s wr, 5 op/s
Oct 10 10:14:16 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:16 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:14:16 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:16.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:14:17 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:14:17 compute-1 nova_compute[235132]: 2025-10-10 10:14:17.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:14:17 compute-1 nova_compute[235132]: 2025-10-10 10:14:17.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:18 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:18 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:14:18 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:18.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:14:18 compute-1 ceph-mon[79167]: pgmap v850: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 35 KiB/s wr, 5 op/s
Oct 10 10:14:18 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/3515183494' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:14:18 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:18 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:14:18 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:18.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:14:18 compute-1 sudo[241898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:14:18 compute-1 sudo[241898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:14:18 compute-1 sudo[241898]: pam_unix(sudo:session): session closed for user root
Oct 10 10:14:19 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Scheduled restart job, restart counter is at 13.
Oct 10 10:14:19 compute-1 systemd[1]: Stopped Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 10:14:19 compute-1 systemd[1]: ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4@nfs.cephfs.0.0.compute-1.mssvzx.service: Consumed 2.053s CPU time.
Oct 10 10:14:19 compute-1 systemd[1]: Starting Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4...
Oct 10 10:14:19 compute-1 podman[241974]: 2025-10-10 10:14:19.989760629 +0000 UTC m=+0.079447274 container create d3cf84749d9f2f04e4804a4d648101430763ca38f98380504c4e60979dc43596 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct 10 10:14:20 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:20 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:14:20 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:20.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:14:20 compute-1 podman[241974]: 2025-10-10 10:14:19.954452993 +0000 UTC m=+0.044139678 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:14:20 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/906424582df63a0dcd4754b49e316089430f2062271469aac676efa80cd3183f/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 10 10:14:20 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/906424582df63a0dcd4754b49e316089430f2062271469aac676efa80cd3183f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:14:20 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/906424582df63a0dcd4754b49e316089430f2062271469aac676efa80cd3183f/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:14:20 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/906424582df63a0dcd4754b49e316089430f2062271469aac676efa80cd3183f/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.0.0.compute-1.mssvzx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 10 10:14:20 compute-1 podman[241974]: 2025-10-10 10:14:20.108556347 +0000 UTC m=+0.198243372 container init d3cf84749d9f2f04e4804a4d648101430763ca38f98380504c4e60979dc43596 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 10 10:14:20 compute-1 podman[241974]: 2025-10-10 10:14:20.11747771 +0000 UTC m=+0.207164325 container start d3cf84749d9f2f04e4804a4d648101430763ca38f98380504c4e60979dc43596 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 10 10:14:20 compute-1 bash[241974]: d3cf84749d9f2f04e4804a4d648101430763ca38f98380504c4e60979dc43596
Oct 10 10:14:20 compute-1 systemd[1]: Started Ceph nfs.cephfs.0.0.compute-1.mssvzx for 21f084a3-af34-5230-afe4-ea5cd24a55f4.
Oct 10 10:14:20 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:20 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 10 10:14:20 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:20 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 10 10:14:20 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:20 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 10 10:14:20 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:20 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 10 10:14:20 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:20 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 10 10:14:20 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:20 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 10 10:14:20 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:20 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 10 10:14:20 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:20 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:14:20 compute-1 ceph-mon[79167]: pgmap v851: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 39 KiB/s wr, 5 op/s
Oct 10 10:14:20 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:20 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:20 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:20.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:22 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:22 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:14:22 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:22.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:14:22 compute-1 ceph-mon[79167]: pgmap v852: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 5.1 KiB/s rd, 16 KiB/s wr, 0 op/s
Oct 10 10:14:22 compute-1 nova_compute[235132]: 2025-10-10 10:14:22.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:22 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:14:22 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:22 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:22 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:22.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:22 compute-1 nova_compute[235132]: 2025-10-10 10:14:22.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:23 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1953104862' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:14:23 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1711456005' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:14:24 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:24 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:14:24 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:24.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:14:24 compute-1 ceph-mon[79167]: pgmap v853: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 35 op/s
Oct 10 10:14:24 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:24 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:14:24 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:24.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:14:25 compute-1 ceph-mon[79167]: pgmap v854: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 10 10:14:26 compute-1 podman[242036]: 2025-10-10 10:14:26.015568986 +0000 UTC m=+0.102966076 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 10 10:14:26 compute-1 podman[242035]: 2025-10-10 10:14:26.019407521 +0000 UTC m=+0.110138472 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 10 10:14:26 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:26 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:14:26 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:26.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:14:26 compute-1 podman[242037]: 2025-10-10 10:14:26.054377058 +0000 UTC m=+0.135017634 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller)
Oct 10 10:14:26 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:26 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:14:26 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:26 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:14:26 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:26 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:14:26 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:26 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:26 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:26.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:26 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/2445054076' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:14:26 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/2445054076' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:14:27 compute-1 nova_compute[235132]: 2025-10-10 10:14:27.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:27 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:14:27 compute-1 ceph-mon[79167]: pgmap v855: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 10 10:14:27 compute-1 nova_compute[235132]: 2025-10-10 10:14:27.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:28 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:28 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:14:28 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:28.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:14:28 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:28 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:28 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:28.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:30 compute-1 ceph-mon[79167]: pgmap v856: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Oct 10 10:14:30 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:30 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:30 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:30.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:30 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:30 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:14:30 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:30.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:14:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:30 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:14:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:30 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:14:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:30 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:14:31 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:31 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:14:32 compute-1 ceph-mon[79167]: pgmap v857: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Oct 10 10:14:32 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:14:32 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:32 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:32 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:32.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:32 compute-1 nova_compute[235132]: 2025-10-10 10:14:32.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:32 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:14:32 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:32 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:32 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:32.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:32 compute-1 nova_compute[235132]: 2025-10-10 10:14:32.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:34 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:34 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:34 compute-1 ceph-mon[79167]: pgmap v858: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Oct 10 10:14:34 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:34.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:34 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:34 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:34 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:34.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:34 compute-1 sudo[242100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:14:34 compute-1 sudo[242100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:14:34 compute-1 sudo[242100]: pam_unix(sudo:session): session closed for user root
Oct 10 10:14:34 compute-1 sudo[242125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Oct 10 10:14:34 compute-1 sudo[242125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:14:34 compute-1 sudo[242125]: pam_unix(sudo:session): session closed for user root
Oct 10 10:14:34 compute-1 sudo[242170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:14:35 compute-1 sudo[242170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:14:35 compute-1 sudo[242170]: pam_unix(sudo:session): session closed for user root
Oct 10 10:14:35 compute-1 sudo[242195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:14:35 compute-1 sudo[242195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:14:35 compute-1 sudo[242195]: pam_unix(sudo:session): session closed for user root
Oct 10 10:14:35 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:14:35 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:14:35 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:14:35 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:14:35 compute-1 ceph-mon[79167]: pgmap v859: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Oct 10 10:14:35 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:14:35 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:14:35 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:14:35 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:14:35 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:14:35 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:14:35 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:14:36 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:35 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:14:36 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:35 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:14:36 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:35 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:14:36 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:36 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:14:36 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:36 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.002000054s ======
Oct 10 10:14:36 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:36.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Oct 10 10:14:36 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:36 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:36 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:36.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:37 compute-1 nova_compute[235132]: 2025-10-10 10:14:37.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:14:37 compute-1 nova_compute[235132]: 2025-10-10 10:14:37.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:38 compute-1 ceph-mon[79167]: pgmap v860: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Oct 10 10:14:38 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:38 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:38 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:38.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:38 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:38 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:14:38 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:38.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:14:38 compute-1 sudo[242253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:14:38 compute-1 sudo[242253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:14:38 compute-1 sudo[242253]: pam_unix(sudo:session): session closed for user root
Oct 10 10:14:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [WARNING] 282/101439 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 10 10:14:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [NOTICE] 282/101439 (4) : haproxy version is 2.3.17-d1c9119
Oct 10 10:14:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [NOTICE] 282/101439 (4) : path to executable is /usr/local/sbin/haproxy
Oct 10 10:14:39 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw[85670]: [ALERT] 282/101439 (4) : backend 'backend' has no server available!
Oct 10 10:14:40 compute-1 ceph-mon[79167]: pgmap v861: 353 pgs: 353 active+clean; 188 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 111 op/s
Oct 10 10:14:40 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:40 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:14:40 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:40.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:14:40 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:40 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:40 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:40.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:40 compute-1 sudo[242279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:14:40 compute-1 sudo[242279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:14:40 compute-1 sudo[242279]: pam_unix(sudo:session): session closed for user root
Oct 10 10:14:41 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:40 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:14:41 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:40 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:14:41 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:40 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:14:41 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:41 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:14:41 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:14:41 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:14:41 compute-1 ceph-mon[79167]: pgmap v862: 353 pgs: 353 active+clean; 188 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 770 KiB/s rd, 2.0 MiB/s wr, 54 op/s
Oct 10 10:14:42 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:42 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:42 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:42.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:14:42.209 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:14:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:14:42.210 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:14:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:14:42.211 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:14:42 compute-1 nova_compute[235132]: 2025-10-10 10:14:42.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:42 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:14:42 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:42 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:14:42 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:42.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:14:42 compute-1 nova_compute[235132]: 2025-10-10 10:14:42.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:42 compute-1 podman[242305]: 2025-10-10 10:14:42.966381 +0000 UTC m=+0.060977959 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 10 10:14:44 compute-1 ceph-mon[79167]: pgmap v863: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 917 KiB/s rd, 2.1 MiB/s wr, 83 op/s
Oct 10 10:14:44 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:44 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:44 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:44.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:44 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:44 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:44 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:44.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:45 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:14:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:45 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:14:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:45 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:14:45 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:45 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:14:45 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:14:45.776 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:dc:6a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:2f:dd:4e:d8:41'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:14:45 compute-1 nova_compute[235132]: 2025-10-10 10:14:45.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:45 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:14:45.779 141156 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 10 10:14:46 compute-1 ceph-mon[79167]: pgmap v864: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 10 10:14:46 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:46 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:46 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:46.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:46 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:46 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:46 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:46.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:47 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:14:47 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #52. Immutable memtables: 0.
Oct 10 10:14:47 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:14:47.064699) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 10:14:47 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 52
Oct 10 10:14:47 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091287064733, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2348, "num_deletes": 251, "total_data_size": 6162003, "memory_usage": 6261280, "flush_reason": "Manual Compaction"}
Oct 10 10:14:47 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #53: started
Oct 10 10:14:47 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091287083850, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 53, "file_size": 3994523, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25994, "largest_seqno": 28337, "table_properties": {"data_size": 3985243, "index_size": 5774, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19477, "raw_average_key_size": 20, "raw_value_size": 3966615, "raw_average_value_size": 4119, "num_data_blocks": 254, "num_entries": 963, "num_filter_entries": 963, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760091080, "oldest_key_time": 1760091080, "file_creation_time": 1760091287, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:14:47 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 19249 microseconds, and 9032 cpu microseconds.
Oct 10 10:14:47 compute-1 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:14:47 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:14:47.083937) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #53: 3994523 bytes OK
Oct 10 10:14:47 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:14:47.083972) [db/memtable_list.cc:519] [default] Level-0 commit table #53 started
Oct 10 10:14:47 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:14:47.092747) [db/memtable_list.cc:722] [default] Level-0 commit table #53: memtable #1 done
Oct 10 10:14:47 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:14:47.092776) EVENT_LOG_v1 {"time_micros": 1760091287092767, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 10:14:47 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:14:47.092807) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 10:14:47 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 6151592, prev total WAL file size 6151592, number of live WAL files 2.
Oct 10 10:14:47 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000049.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:14:47 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:14:47.095547) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Oct 10 10:14:47 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 10:14:47 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [53(3900KB)], [51(11MB)]
Oct 10 10:14:47 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091287095628, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [53], "files_L6": [51], "score": -1, "input_data_size": 16448022, "oldest_snapshot_seqno": -1}
Oct 10 10:14:47 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #54: 5814 keys, 14326170 bytes, temperature: kUnknown
Oct 10 10:14:47 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091287157599, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 54, "file_size": 14326170, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14286918, "index_size": 23590, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14597, "raw_key_size": 147756, "raw_average_key_size": 25, "raw_value_size": 14181498, "raw_average_value_size": 2439, "num_data_blocks": 964, "num_entries": 5814, "num_filter_entries": 5814, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089522, "oldest_key_time": 0, "file_creation_time": 1760091287, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 54, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:14:47 compute-1 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:14:47 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:14:47.157918) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 14326170 bytes
Oct 10 10:14:47 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:14:47.159388) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 265.0 rd, 230.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 11.9 +0.0 blob) out(13.7 +0.0 blob), read-write-amplify(7.7) write-amplify(3.6) OK, records in: 6332, records dropped: 518 output_compression: NoCompression
Oct 10 10:14:47 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:14:47.159405) EVENT_LOG_v1 {"time_micros": 1760091287159396, "job": 30, "event": "compaction_finished", "compaction_time_micros": 62064, "compaction_time_cpu_micros": 41192, "output_level": 6, "num_output_files": 1, "total_output_size": 14326170, "num_input_records": 6332, "num_output_records": 5814, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 10:14:47 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:14:47 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091287160292, "job": 30, "event": "table_file_deletion", "file_number": 53}
Oct 10 10:14:47 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000051.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:14:47 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091287162449, "job": 30, "event": "table_file_deletion", "file_number": 51}
Oct 10 10:14:47 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:14:47.095460) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:14:47 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:14:47.162494) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:14:47 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:14:47.162501) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:14:47 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:14:47.162503) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:14:47 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:14:47.162505) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:14:47 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:14:47.162507) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:14:47 compute-1 nova_compute[235132]: 2025-10-10 10:14:47.327 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:14:47 compute-1 nova_compute[235132]: 2025-10-10 10:14:47.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:48 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:48 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:48 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:48.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:48 compute-1 ceph-mon[79167]: pgmap v865: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 10 10:14:48 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:48 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:48 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:48.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:14:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:14:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:14:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:14:49 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1817080526' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:14:49 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2226505712' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:14:49 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:14:49.784 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ee0899c1-415d-4aa8-abe8-1240b4e8bf2c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:14:50 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:50 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:14:50 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:50.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:14:50 compute-1 ceph-mon[79167]: pgmap v866: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Oct 10 10:14:50 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:50 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:50 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:50.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:51 compute-1 nova_compute[235132]: 2025-10-10 10:14:51.040 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:14:51 compute-1 nova_compute[235132]: 2025-10-10 10:14:51.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:14:52 compute-1 nova_compute[235132]: 2025-10-10 10:14:52.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:14:52 compute-1 nova_compute[235132]: 2025-10-10 10:14:52.045 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:14:52 compute-1 nova_compute[235132]: 2025-10-10 10:14:52.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 10:14:52 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:52 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:52 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:52.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:52 compute-1 ceph-mon[79167]: pgmap v867: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 167 KiB/s rd, 109 KiB/s wr, 57 op/s
Oct 10 10:14:52 compute-1 nova_compute[235132]: 2025-10-10 10:14:52.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:14:52 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:52 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:52 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:52.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:52 compute-1 nova_compute[235132]: 2025-10-10 10:14:52.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:14:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:14:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:14:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:14:53 compute-1 nova_compute[235132]: 2025-10-10 10:14:53.193 2 DEBUG oslo_concurrency.lockutils [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "12298a8d-d383-47da-91e4-0a918e153f1d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:14:53 compute-1 nova_compute[235132]: 2025-10-10 10:14:53.194 2 DEBUG oslo_concurrency.lockutils [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "12298a8d-d383-47da-91e4-0a918e153f1d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:14:53 compute-1 nova_compute[235132]: 2025-10-10 10:14:53.194 2 DEBUG oslo_concurrency.lockutils [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "12298a8d-d383-47da-91e4-0a918e153f1d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:14:53 compute-1 nova_compute[235132]: 2025-10-10 10:14:53.194 2 DEBUG oslo_concurrency.lockutils [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "12298a8d-d383-47da-91e4-0a918e153f1d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:14:53 compute-1 nova_compute[235132]: 2025-10-10 10:14:53.194 2 DEBUG oslo_concurrency.lockutils [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "12298a8d-d383-47da-91e4-0a918e153f1d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:14:53 compute-1 nova_compute[235132]: 2025-10-10 10:14:53.196 2 INFO nova.compute.manager [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Terminating instance
Oct 10 10:14:53 compute-1 nova_compute[235132]: 2025-10-10 10:14:53.196 2 DEBUG nova.compute.manager [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 10 10:14:53 compute-1 kernel: tap446b0e59-d2 (unregistering): left promiscuous mode
Oct 10 10:14:53 compute-1 NetworkManager[44982]: <info>  [1760091293.2488] device (tap446b0e59-d2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 10 10:14:53 compute-1 nova_compute[235132]: 2025-10-10 10:14:53.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:53 compute-1 ovn_controller[131749]: 2025-10-10T10:14:53Z|00064|binding|INFO|Releasing lport 446b0e59-d2be-42d8-801f-7ba63ba76e66 from this chassis (sb_readonly=0)
Oct 10 10:14:53 compute-1 ovn_controller[131749]: 2025-10-10T10:14:53Z|00065|binding|INFO|Setting lport 446b0e59-d2be-42d8-801f-7ba63ba76e66 down in Southbound
Oct 10 10:14:53 compute-1 ovn_controller[131749]: 2025-10-10T10:14:53Z|00066|binding|INFO|Removing iface tap446b0e59-d2 ovn-installed in OVS
Oct 10 10:14:53 compute-1 nova_compute[235132]: 2025-10-10 10:14:53.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:53 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:14:53.265 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:f6:71 10.100.0.14'], port_security=['fa:16:3e:9d:f6:71 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '12298a8d-d383-47da-91e4-0a918e153f1d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c8850c4c-dc38-4440-9c03-f2dd59684fe6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e222deba-0df5-4a21-bff7-930fc17b2ea1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=21e2152f-e965-46e3-9774-988f8fdf189b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>], logical_port=446b0e59-d2be-42d8-801f-7ba63ba76e66) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:14:53 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:14:53.267 141156 INFO neutron.agent.ovn.metadata.agent [-] Port 446b0e59-d2be-42d8-801f-7ba63ba76e66 in datapath c8850c4c-dc38-4440-9c03-f2dd59684fe6 unbound from our chassis
Oct 10 10:14:53 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:14:53.267 141156 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c8850c4c-dc38-4440-9c03-f2dd59684fe6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 10 10:14:53 compute-1 nova_compute[235132]: 2025-10-10 10:14:53.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:53 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:14:53.273 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[1d723388-9917-4a89-a30e-9bf4d877ad18]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:14:53 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:14:53.274 141156 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c8850c4c-dc38-4440-9c03-f2dd59684fe6 namespace which is not needed anymore
Oct 10 10:14:53 compute-1 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000004.scope: Deactivated successfully.
Oct 10 10:14:53 compute-1 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000004.scope: Consumed 16.377s CPU time.
Oct 10 10:14:53 compute-1 systemd-machined[191637]: Machine qemu-3-instance-00000004 terminated.
Oct 10 10:14:53 compute-1 neutron-haproxy-ovnmeta-c8850c4c-dc38-4440-9c03-f2dd59684fe6[241616]: [NOTICE]   (241620) : haproxy version is 2.8.14-c23fe91
Oct 10 10:14:53 compute-1 neutron-haproxy-ovnmeta-c8850c4c-dc38-4440-9c03-f2dd59684fe6[241616]: [NOTICE]   (241620) : path to executable is /usr/sbin/haproxy
Oct 10 10:14:53 compute-1 neutron-haproxy-ovnmeta-c8850c4c-dc38-4440-9c03-f2dd59684fe6[241616]: [WARNING]  (241620) : Exiting Master process...
Oct 10 10:14:53 compute-1 neutron-haproxy-ovnmeta-c8850c4c-dc38-4440-9c03-f2dd59684fe6[241616]: [WARNING]  (241620) : Exiting Master process...
Oct 10 10:14:53 compute-1 neutron-haproxy-ovnmeta-c8850c4c-dc38-4440-9c03-f2dd59684fe6[241616]: [ALERT]    (241620) : Current worker (241629) exited with code 143 (Terminated)
Oct 10 10:14:53 compute-1 neutron-haproxy-ovnmeta-c8850c4c-dc38-4440-9c03-f2dd59684fe6[241616]: [WARNING]  (241620) : All workers exited. Exiting... (0)
Oct 10 10:14:53 compute-1 systemd[1]: libpod-45c916acf584203b7020f57ecd227df56e00ba5f26ca42ca74b61d77d56523b2.scope: Deactivated successfully.
Oct 10 10:14:53 compute-1 nova_compute[235132]: 2025-10-10 10:14:53.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:53 compute-1 conmon[241616]: conmon 45c916acf584203b7020 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-45c916acf584203b7020f57ecd227df56e00ba5f26ca42ca74b61d77d56523b2.scope/container/memory.events
Oct 10 10:14:53 compute-1 nova_compute[235132]: 2025-10-10 10:14:53.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:53 compute-1 podman[242353]: 2025-10-10 10:14:53.42307073 +0000 UTC m=+0.052233613 container died 45c916acf584203b7020f57ecd227df56e00ba5f26ca42ca74b61d77d56523b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c8850c4c-dc38-4440-9c03-f2dd59684fe6, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 10 10:14:53 compute-1 nova_compute[235132]: 2025-10-10 10:14:53.434 2 INFO nova.virt.libvirt.driver [-] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Instance destroyed successfully.
Oct 10 10:14:53 compute-1 nova_compute[235132]: 2025-10-10 10:14:53.435 2 DEBUG nova.objects.instance [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'resources' on Instance uuid 12298a8d-d383-47da-91e4-0a918e153f1d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 10:14:53 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-45c916acf584203b7020f57ecd227df56e00ba5f26ca42ca74b61d77d56523b2-userdata-shm.mount: Deactivated successfully.
Oct 10 10:14:53 compute-1 systemd[1]: var-lib-containers-storage-overlay-7c55767b3c231b50f03b73e902ec5a6e120dd175734d051879abbfb9aabc4097-merged.mount: Deactivated successfully.
Oct 10 10:14:53 compute-1 nova_compute[235132]: 2025-10-10 10:14:53.472 2 DEBUG nova.virt.libvirt.vif [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-10T10:13:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-742591551',display_name='tempest-TestNetworkBasicOps-server-742591551',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-742591551',id=4,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDCoryMMDZ6cZj1EAzGK4muKCZLgNsQyPcigwS48pCfmWHQQLrGNGrCkXZ7qqZSzWLyfX4m7fzgUMEko2IR4dU9srCI10SLqm/ZSwQK7hB66f+rf62WEii+W4TMQEFu9vA==',key_name='tempest-TestNetworkBasicOps-766718028',keypairs=<?>,launch_index=0,launched_at=2025-10-10T10:13:50Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-svhla3ss',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-10T10:13:50Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=12298a8d-d383-47da-91e4-0a918e153f1d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "address": "fa:16:3e:9d:f6:71", "network": {"id": "c8850c4c-dc38-4440-9c03-f2dd59684fe6", "bridge": "br-int", "label": "tempest-network-smoke--781461532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap446b0e59-d2", "ovs_interfaceid": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 10 10:14:53 compute-1 nova_compute[235132]: 2025-10-10 10:14:53.474 2 DEBUG nova.network.os_vif_util [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "address": "fa:16:3e:9d:f6:71", "network": {"id": "c8850c4c-dc38-4440-9c03-f2dd59684fe6", "bridge": "br-int", "label": "tempest-network-smoke--781461532", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap446b0e59-d2", "ovs_interfaceid": "446b0e59-d2be-42d8-801f-7ba63ba76e66", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 10 10:14:53 compute-1 nova_compute[235132]: 2025-10-10 10:14:53.475 2 DEBUG nova.network.os_vif_util [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9d:f6:71,bridge_name='br-int',has_traffic_filtering=True,id=446b0e59-d2be-42d8-801f-7ba63ba76e66,network=Network(c8850c4c-dc38-4440-9c03-f2dd59684fe6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap446b0e59-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 10 10:14:53 compute-1 podman[242353]: 2025-10-10 10:14:53.476606687 +0000 UTC m=+0.105769580 container cleanup 45c916acf584203b7020f57ecd227df56e00ba5f26ca42ca74b61d77d56523b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c8850c4c-dc38-4440-9c03-f2dd59684fe6, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:14:53 compute-1 nova_compute[235132]: 2025-10-10 10:14:53.476 2 DEBUG os_vif [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:f6:71,bridge_name='br-int',has_traffic_filtering=True,id=446b0e59-d2be-42d8-801f-7ba63ba76e66,network=Network(c8850c4c-dc38-4440-9c03-f2dd59684fe6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap446b0e59-d2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 10 10:14:53 compute-1 nova_compute[235132]: 2025-10-10 10:14:53.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:53 compute-1 nova_compute[235132]: 2025-10-10 10:14:53.479 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap446b0e59-d2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:14:53 compute-1 nova_compute[235132]: 2025-10-10 10:14:53.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:53 compute-1 nova_compute[235132]: 2025-10-10 10:14:53.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:53 compute-1 nova_compute[235132]: 2025-10-10 10:14:53.488 2 INFO os_vif [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:f6:71,bridge_name='br-int',has_traffic_filtering=True,id=446b0e59-d2be-42d8-801f-7ba63ba76e66,network=Network(c8850c4c-dc38-4440-9c03-f2dd59684fe6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap446b0e59-d2')
Oct 10 10:14:53 compute-1 systemd[1]: libpod-conmon-45c916acf584203b7020f57ecd227df56e00ba5f26ca42ca74b61d77d56523b2.scope: Deactivated successfully.
Oct 10 10:14:53 compute-1 podman[242397]: 2025-10-10 10:14:53.558695418 +0000 UTC m=+0.051512604 container remove 45c916acf584203b7020f57ecd227df56e00ba5f26ca42ca74b61d77d56523b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c8850c4c-dc38-4440-9c03-f2dd59684fe6, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 10 10:14:53 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:14:53.568 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[c54ab7e9-06be-4564-a6a8-53ef45e36e5c]: (4, ('Fri Oct 10 10:14:53 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c8850c4c-dc38-4440-9c03-f2dd59684fe6 (45c916acf584203b7020f57ecd227df56e00ba5f26ca42ca74b61d77d56523b2)\n45c916acf584203b7020f57ecd227df56e00ba5f26ca42ca74b61d77d56523b2\nFri Oct 10 10:14:53 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c8850c4c-dc38-4440-9c03-f2dd59684fe6 (45c916acf584203b7020f57ecd227df56e00ba5f26ca42ca74b61d77d56523b2)\n45c916acf584203b7020f57ecd227df56e00ba5f26ca42ca74b61d77d56523b2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:14:53 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:14:53.571 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[b8db67d3-0f3c-48cf-96b5-f6ab2a8374ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:14:53 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:14:53.572 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc8850c4c-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:14:53 compute-1 kernel: tapc8850c4c-d0: left promiscuous mode
Oct 10 10:14:53 compute-1 nova_compute[235132]: 2025-10-10 10:14:53.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:53 compute-1 nova_compute[235132]: 2025-10-10 10:14:53.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:53 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:14:53.592 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[95a4b513-4db0-4109-8056-83dd8d557ea7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:14:53 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:14:53.613 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[8e1f07d8-f92f-41a6-b189-223fa3805669]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:14:53 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:14:53.615 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[94c943a5-cc2a-4986-987e-82c796536604]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:14:53 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:14:53.628 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[5cda8a65-cae0-4ac7-8c8e-a9a2fb1f6eff]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 414233, 'reachable_time': 32950, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 242428, 'error': None, 'target': 'ovnmeta-c8850c4c-dc38-4440-9c03-f2dd59684fe6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:14:53 compute-1 systemd[1]: run-netns-ovnmeta\x2dc8850c4c\x2ddc38\x2d4440\x2d9c03\x2df2dd59684fe6.mount: Deactivated successfully.
Oct 10 10:14:53 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:14:53.634 141275 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c8850c4c-dc38-4440-9c03-f2dd59684fe6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 10 10:14:53 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:14:53.634 141275 DEBUG oslo.privsep.daemon [-] privsep: reply[5c6f4ef2-93b4-4360-8101-e0b9a2cb7e30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:14:53 compute-1 nova_compute[235132]: 2025-10-10 10:14:53.913 2 INFO nova.virt.libvirt.driver [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Deleting instance files /var/lib/nova/instances/12298a8d-d383-47da-91e4-0a918e153f1d_del
Oct 10 10:14:53 compute-1 nova_compute[235132]: 2025-10-10 10:14:53.914 2 INFO nova.virt.libvirt.driver [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Deletion of /var/lib/nova/instances/12298a8d-d383-47da-91e4-0a918e153f1d_del complete
Oct 10 10:14:54 compute-1 nova_compute[235132]: 2025-10-10 10:14:54.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:14:54 compute-1 nova_compute[235132]: 2025-10-10 10:14:54.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 10:14:54 compute-1 nova_compute[235132]: 2025-10-10 10:14:54.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 10:14:54 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:54 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:14:54 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:54.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:14:54 compute-1 nova_compute[235132]: 2025-10-10 10:14:54.106 2 INFO nova.compute.manager [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Took 0.91 seconds to destroy the instance on the hypervisor.
Oct 10 10:14:54 compute-1 nova_compute[235132]: 2025-10-10 10:14:54.107 2 DEBUG oslo.service.loopingcall [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 10 10:14:54 compute-1 nova_compute[235132]: 2025-10-10 10:14:54.107 2 DEBUG nova.compute.manager [-] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 10 10:14:54 compute-1 nova_compute[235132]: 2025-10-10 10:14:54.108 2 DEBUG nova.network.neutron [-] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 10 10:14:54 compute-1 ceph-mon[79167]: pgmap v868: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 168 KiB/s rd, 109 KiB/s wr, 58 op/s
Oct 10 10:14:54 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1407840113' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:14:54 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1389124777' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:14:54 compute-1 nova_compute[235132]: 2025-10-10 10:14:54.158 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Oct 10 10:14:54 compute-1 nova_compute[235132]: 2025-10-10 10:14:54.158 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 10 10:14:54 compute-1 nova_compute[235132]: 2025-10-10 10:14:54.319 2 DEBUG nova.compute.manager [req-26505a5c-4f03-4356-afc0-119d6bd76b4f req-af5f41a3-6622-40b4-80b8-a93bc25f3325 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Received event network-vif-unplugged-446b0e59-d2be-42d8-801f-7ba63ba76e66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:14:54 compute-1 nova_compute[235132]: 2025-10-10 10:14:54.320 2 DEBUG oslo_concurrency.lockutils [req-26505a5c-4f03-4356-afc0-119d6bd76b4f req-af5f41a3-6622-40b4-80b8-a93bc25f3325 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "12298a8d-d383-47da-91e4-0a918e153f1d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:14:54 compute-1 nova_compute[235132]: 2025-10-10 10:14:54.320 2 DEBUG oslo_concurrency.lockutils [req-26505a5c-4f03-4356-afc0-119d6bd76b4f req-af5f41a3-6622-40b4-80b8-a93bc25f3325 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "12298a8d-d383-47da-91e4-0a918e153f1d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:14:54 compute-1 nova_compute[235132]: 2025-10-10 10:14:54.320 2 DEBUG oslo_concurrency.lockutils [req-26505a5c-4f03-4356-afc0-119d6bd76b4f req-af5f41a3-6622-40b4-80b8-a93bc25f3325 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "12298a8d-d383-47da-91e4-0a918e153f1d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:14:54 compute-1 nova_compute[235132]: 2025-10-10 10:14:54.321 2 DEBUG nova.compute.manager [req-26505a5c-4f03-4356-afc0-119d6bd76b4f req-af5f41a3-6622-40b4-80b8-a93bc25f3325 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] No waiting events found dispatching network-vif-unplugged-446b0e59-d2be-42d8-801f-7ba63ba76e66 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 10:14:54 compute-1 nova_compute[235132]: 2025-10-10 10:14:54.321 2 DEBUG nova.compute.manager [req-26505a5c-4f03-4356-afc0-119d6bd76b4f req-af5f41a3-6622-40b4-80b8-a93bc25f3325 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Received event network-vif-unplugged-446b0e59-d2be-42d8-801f-7ba63ba76e66 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 10 10:14:54 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:54 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:54 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:54.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:54 compute-1 nova_compute[235132]: 2025-10-10 10:14:54.995 2 DEBUG nova.network.neutron [-] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:14:55 compute-1 nova_compute[235132]: 2025-10-10 10:14:55.013 2 INFO nova.compute.manager [-] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Took 0.91 seconds to deallocate network for instance.
Oct 10 10:14:55 compute-1 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 10:14:55 compute-1 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 11K writes, 41K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 11K writes, 3034 syncs, 3.72 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2207 writes, 6322 keys, 2207 commit groups, 1.0 writes per commit group, ingest: 6.08 MB, 0.01 MB/s
                                           Interval WAL: 2207 writes, 970 syncs, 2.28 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 10 10:14:55 compute-1 nova_compute[235132]: 2025-10-10 10:14:55.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:14:55 compute-1 nova_compute[235132]: 2025-10-10 10:14:55.063 2 DEBUG oslo_concurrency.lockutils [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:14:55 compute-1 nova_compute[235132]: 2025-10-10 10:14:55.064 2 DEBUG oslo_concurrency.lockutils [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:14:55 compute-1 nova_compute[235132]: 2025-10-10 10:14:55.117 2 DEBUG oslo_concurrency.processutils [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:14:55 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1987096323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:14:55 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:14:55 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4254524373' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:14:55 compute-1 nova_compute[235132]: 2025-10-10 10:14:55.627 2 DEBUG oslo_concurrency.processutils [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:14:55 compute-1 nova_compute[235132]: 2025-10-10 10:14:55.635 2 DEBUG nova.compute.provider_tree [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:14:55 compute-1 nova_compute[235132]: 2025-10-10 10:14:55.663 2 DEBUG nova.scheduler.client.report [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:14:55 compute-1 nova_compute[235132]: 2025-10-10 10:14:55.722 2 DEBUG oslo_concurrency.lockutils [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:14:55 compute-1 nova_compute[235132]: 2025-10-10 10:14:55.750 2 INFO nova.scheduler.client.report [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Deleted allocations for instance 12298a8d-d383-47da-91e4-0a918e153f1d
Oct 10 10:14:55 compute-1 nova_compute[235132]: 2025-10-10 10:14:55.852 2 DEBUG oslo_concurrency.lockutils [None req-f19d582e-6ed8-415f-97d4-e8b95bd4b9b5 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "12298a8d-d383-47da-91e4-0a918e153f1d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:14:56 compute-1 nova_compute[235132]: 2025-10-10 10:14:56.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:14:56 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:56 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:56 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:56.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:56 compute-1 ceph-mon[79167]: pgmap v869: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 14 KiB/s wr, 29 op/s
Oct 10 10:14:56 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/773403696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:14:56 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/4254524373' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:14:56 compute-1 nova_compute[235132]: 2025-10-10 10:14:56.440 2 DEBUG nova.compute.manager [req-b2f0d06e-0b6d-4858-970b-61bb4890dd59 req-1648d625-3879-43be-a2c5-d742296eed05 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Received event network-vif-plugged-446b0e59-d2be-42d8-801f-7ba63ba76e66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:14:56 compute-1 nova_compute[235132]: 2025-10-10 10:14:56.440 2 DEBUG oslo_concurrency.lockutils [req-b2f0d06e-0b6d-4858-970b-61bb4890dd59 req-1648d625-3879-43be-a2c5-d742296eed05 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "12298a8d-d383-47da-91e4-0a918e153f1d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:14:56 compute-1 nova_compute[235132]: 2025-10-10 10:14:56.441 2 DEBUG oslo_concurrency.lockutils [req-b2f0d06e-0b6d-4858-970b-61bb4890dd59 req-1648d625-3879-43be-a2c5-d742296eed05 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "12298a8d-d383-47da-91e4-0a918e153f1d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:14:56 compute-1 nova_compute[235132]: 2025-10-10 10:14:56.441 2 DEBUG oslo_concurrency.lockutils [req-b2f0d06e-0b6d-4858-970b-61bb4890dd59 req-1648d625-3879-43be-a2c5-d742296eed05 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "12298a8d-d383-47da-91e4-0a918e153f1d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:14:56 compute-1 nova_compute[235132]: 2025-10-10 10:14:56.441 2 DEBUG nova.compute.manager [req-b2f0d06e-0b6d-4858-970b-61bb4890dd59 req-1648d625-3879-43be-a2c5-d742296eed05 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] No waiting events found dispatching network-vif-plugged-446b0e59-d2be-42d8-801f-7ba63ba76e66 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 10:14:56 compute-1 nova_compute[235132]: 2025-10-10 10:14:56.441 2 WARNING nova.compute.manager [req-b2f0d06e-0b6d-4858-970b-61bb4890dd59 req-1648d625-3879-43be-a2c5-d742296eed05 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Received unexpected event network-vif-plugged-446b0e59-d2be-42d8-801f-7ba63ba76e66 for instance with vm_state deleted and task_state None.
Oct 10 10:14:56 compute-1 nova_compute[235132]: 2025-10-10 10:14:56.442 2 DEBUG nova.compute.manager [req-b2f0d06e-0b6d-4858-970b-61bb4890dd59 req-1648d625-3879-43be-a2c5-d742296eed05 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Received event network-vif-deleted-446b0e59-d2be-42d8-801f-7ba63ba76e66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:14:56 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:56 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:56 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:56.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:56 compute-1 podman[242454]: 2025-10-10 10:14:56.968799161 +0000 UTC m=+0.064768667 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 10 10:14:57 compute-1 podman[242456]: 2025-10-10 10:14:57.004718836 +0000 UTC m=+0.088611881 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 10 10:14:57 compute-1 podman[242455]: 2025-10-10 10:14:57.012056636 +0000 UTC m=+0.095365395 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Oct 10 10:14:57 compute-1 nova_compute[235132]: 2025-10-10 10:14:57.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:14:57 compute-1 nova_compute[235132]: 2025-10-10 10:14:57.070 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:14:57 compute-1 nova_compute[235132]: 2025-10-10 10:14:57.070 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:14:57 compute-1 nova_compute[235132]: 2025-10-10 10:14:57.071 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:14:57 compute-1 nova_compute[235132]: 2025-10-10 10:14:57.071 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:14:57 compute-1 nova_compute[235132]: 2025-10-10 10:14:57.072 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:14:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:14:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:14:57 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/688430293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:14:57 compute-1 nova_compute[235132]: 2025-10-10 10:14:57.533 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:14:57 compute-1 nova_compute[235132]: 2025-10-10 10:14:57.773 2 WARNING nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:14:57 compute-1 nova_compute[235132]: 2025-10-10 10:14:57.774 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4888MB free_disk=59.94269943237305GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:14:57 compute-1 nova_compute[235132]: 2025-10-10 10:14:57.774 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:14:57 compute-1 nova_compute[235132]: 2025-10-10 10:14:57.775 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:14:57 compute-1 nova_compute[235132]: 2025-10-10 10:14:57.856 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:14:57 compute-1 nova_compute[235132]: 2025-10-10 10:14:57.856 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:14:57 compute-1 nova_compute[235132]: 2025-10-10 10:14:57.876 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:14:57 compute-1 nova_compute[235132]: 2025-10-10 10:14:57.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:14:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:14:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:14:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:14:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:14:58 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:58 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:14:58 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:14:58.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:14:58 compute-1 ceph-mon[79167]: pgmap v870: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 14 KiB/s wr, 29 op/s
Oct 10 10:14:58 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/688430293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:14:58 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:14:58 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2213807489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:14:58 compute-1 nova_compute[235132]: 2025-10-10 10:14:58.323 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:14:58 compute-1 nova_compute[235132]: 2025-10-10 10:14:58.331 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:14:58 compute-1 nova_compute[235132]: 2025-10-10 10:14:58.349 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:14:58 compute-1 nova_compute[235132]: 2025-10-10 10:14:58.380 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:14:58 compute-1 nova_compute[235132]: 2025-10-10 10:14:58.381 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:14:58 compute-1 nova_compute[235132]: 2025-10-10 10:14:58.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:14:58 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:14:58 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:14:58 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:14:58.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:14:58 compute-1 sudo[242564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:14:58 compute-1 sudo[242564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:14:58 compute-1 sudo[242564]: pam_unix(sudo:session): session closed for user root
Oct 10 10:14:59 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/2213807489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:14:59 compute-1 nova_compute[235132]: 2025-10-10 10:14:59.383 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:15:00 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:00 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:15:00 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:00.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:15:00 compute-1 ceph-mon[79167]: pgmap v871: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 15 KiB/s wr, 58 op/s
Oct 10 10:15:00 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:00 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:15:00 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:00.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:15:02 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:02 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:02 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:02.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:02 compute-1 ceph-mon[79167]: pgmap v872: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Oct 10 10:15:02 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:15:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:15:02 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:02 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:02 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:02.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:02 compute-1 nova_compute[235132]: 2025-10-10 10:15:02.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:15:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:15:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:15:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:15:03 compute-1 nova_compute[235132]: 2025-10-10 10:15:03.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:03 compute-1 nova_compute[235132]: 2025-10-10 10:15:03.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:03 compute-1 nova_compute[235132]: 2025-10-10 10:15:03.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:04 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:04 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:04 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:04.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:04 compute-1 ceph-mon[79167]: pgmap v873: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Oct 10 10:15:04 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:04 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:04 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:04.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:06 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:06 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:15:06 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:06.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:15:06 compute-1 ceph-mon[79167]: pgmap v874: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 10 10:15:06 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:06 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:06 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:06.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:15:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:15:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:15:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:15:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:15:08 compute-1 nova_compute[235132]: 2025-10-10 10:15:08.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:08 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:08 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:08 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:08.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:08 compute-1 ceph-mon[79167]: pgmap v875: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 10 10:15:08 compute-1 nova_compute[235132]: 2025-10-10 10:15:08.432 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760091293.430017, 12298a8d-d383-47da-91e4-0a918e153f1d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 10:15:08 compute-1 nova_compute[235132]: 2025-10-10 10:15:08.432 2 INFO nova.compute.manager [-] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] VM Stopped (Lifecycle Event)
Oct 10 10:15:08 compute-1 nova_compute[235132]: 2025-10-10 10:15:08.458 2 DEBUG nova.compute.manager [None req-3b3f819e-d943-4e1b-87fa-e67f3245c99f - - - - - -] [instance: 12298a8d-d383-47da-91e4-0a918e153f1d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:15:08 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:08 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:15:08 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:08.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:15:08 compute-1 nova_compute[235132]: 2025-10-10 10:15:08.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:10 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:10 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:10 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:10.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:10 compute-1 ceph-mon[79167]: pgmap v876: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Oct 10 10:15:10 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:10 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:10 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:10.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:12 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:12 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:12 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:12.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:12 compute-1 ceph-mon[79167]: pgmap v877: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:15:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:15:12 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:12 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:12 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:12.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:15:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:15:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:15:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:13 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:15:13 compute-1 nova_compute[235132]: 2025-10-10 10:15:13.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:13 compute-1 nova_compute[235132]: 2025-10-10 10:15:13.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:13 compute-1 podman[242598]: 2025-10-10 10:15:13.98358317 +0000 UTC m=+0.081830465 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:15:14 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:14 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:14 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:14.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:14 compute-1 ceph-mon[79167]: pgmap v878: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:15:14 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:14 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:14 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:14.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:15 compute-1 ceph-mon[79167]: pgmap v879: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:15:16 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:16 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:16 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:16.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:15:16 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:16 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:16 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:16.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:17 compute-1 ceph-mon[79167]: pgmap v880: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:15:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:15:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:15:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:15:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:15:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:15:18 compute-1 nova_compute[235132]: 2025-10-10 10:15:18.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:18 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:18 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:18 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:18.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:18 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:18 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:18 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:18.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:18 compute-1 nova_compute[235132]: 2025-10-10 10:15:18.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:18 compute-1 sudo[242620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:15:18 compute-1 sudo[242620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:15:18 compute-1 sudo[242620]: pam_unix(sudo:session): session closed for user root
Oct 10 10:15:19 compute-1 nova_compute[235132]: 2025-10-10 10:15:19.190 2 DEBUG oslo_concurrency.lockutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:15:19 compute-1 nova_compute[235132]: 2025-10-10 10:15:19.190 2 DEBUG oslo_concurrency.lockutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:15:19 compute-1 nova_compute[235132]: 2025-10-10 10:15:19.208 2 DEBUG nova.compute.manager [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 10 10:15:19 compute-1 nova_compute[235132]: 2025-10-10 10:15:19.296 2 DEBUG oslo_concurrency.lockutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:15:19 compute-1 nova_compute[235132]: 2025-10-10 10:15:19.297 2 DEBUG oslo_concurrency.lockutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:15:19 compute-1 nova_compute[235132]: 2025-10-10 10:15:19.305 2 DEBUG nova.virt.hardware [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 10 10:15:19 compute-1 nova_compute[235132]: 2025-10-10 10:15:19.305 2 INFO nova.compute.claims [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Claim successful on node compute-1.ctlplane.example.com
Oct 10 10:15:19 compute-1 nova_compute[235132]: 2025-10-10 10:15:19.427 2 DEBUG oslo_concurrency.processutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:15:19 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:15:19 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3825464269' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:15:19 compute-1 nova_compute[235132]: 2025-10-10 10:15:19.868 2 DEBUG oslo_concurrency.processutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:15:19 compute-1 nova_compute[235132]: 2025-10-10 10:15:19.878 2 DEBUG nova.compute.provider_tree [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:15:19 compute-1 nova_compute[235132]: 2025-10-10 10:15:19.907 2 DEBUG nova.scheduler.client.report [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:15:19 compute-1 nova_compute[235132]: 2025-10-10 10:15:19.946 2 DEBUG oslo_concurrency.lockutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:15:19 compute-1 nova_compute[235132]: 2025-10-10 10:15:19.947 2 DEBUG nova.compute.manager [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 10 10:15:19 compute-1 nova_compute[235132]: 2025-10-10 10:15:19.988 2 DEBUG nova.compute.manager [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 10 10:15:19 compute-1 nova_compute[235132]: 2025-10-10 10:15:19.989 2 DEBUG nova.network.neutron [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 10 10:15:20 compute-1 nova_compute[235132]: 2025-10-10 10:15:20.009 2 INFO nova.virt.libvirt.driver [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 10 10:15:20 compute-1 nova_compute[235132]: 2025-10-10 10:15:20.029 2 DEBUG nova.compute.manager [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 10 10:15:20 compute-1 ceph-mon[79167]: pgmap v881: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:15:20 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3825464269' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:15:20 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:20 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:20 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:20.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:20 compute-1 nova_compute[235132]: 2025-10-10 10:15:20.124 2 DEBUG nova.compute.manager [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 10 10:15:20 compute-1 nova_compute[235132]: 2025-10-10 10:15:20.126 2 DEBUG nova.virt.libvirt.driver [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 10 10:15:20 compute-1 nova_compute[235132]: 2025-10-10 10:15:20.126 2 INFO nova.virt.libvirt.driver [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Creating image(s)
Oct 10 10:15:20 compute-1 nova_compute[235132]: 2025-10-10 10:15:20.158 2 DEBUG nova.storage.rbd_utils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image bd82d620-e0e5-4fb1-b8a5-973cefbcd107_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:15:20 compute-1 nova_compute[235132]: 2025-10-10 10:15:20.184 2 DEBUG nova.storage.rbd_utils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image bd82d620-e0e5-4fb1-b8a5-973cefbcd107_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:15:20 compute-1 nova_compute[235132]: 2025-10-10 10:15:20.210 2 DEBUG nova.storage.rbd_utils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image bd82d620-e0e5-4fb1-b8a5-973cefbcd107_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:15:20 compute-1 nova_compute[235132]: 2025-10-10 10:15:20.213 2 DEBUG oslo_concurrency.processutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:15:20 compute-1 nova_compute[235132]: 2025-10-10 10:15:20.266 2 DEBUG oslo_concurrency.processutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:15:20 compute-1 nova_compute[235132]: 2025-10-10 10:15:20.267 2 DEBUG oslo_concurrency.lockutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "eec5fe2328f977d3b1a385313e521aef425c0ac1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:15:20 compute-1 nova_compute[235132]: 2025-10-10 10:15:20.267 2 DEBUG oslo_concurrency.lockutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "eec5fe2328f977d3b1a385313e521aef425c0ac1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:15:20 compute-1 nova_compute[235132]: 2025-10-10 10:15:20.267 2 DEBUG oslo_concurrency.lockutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "eec5fe2328f977d3b1a385313e521aef425c0ac1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:15:20 compute-1 nova_compute[235132]: 2025-10-10 10:15:20.294 2 DEBUG nova.storage.rbd_utils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image bd82d620-e0e5-4fb1-b8a5-973cefbcd107_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:15:20 compute-1 nova_compute[235132]: 2025-10-10 10:15:20.297 2 DEBUG oslo_concurrency.processutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 bd82d620-e0e5-4fb1-b8a5-973cefbcd107_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:15:20 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:20 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:20 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:20.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:20 compute-1 nova_compute[235132]: 2025-10-10 10:15:20.602 2 DEBUG oslo_concurrency.processutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 bd82d620-e0e5-4fb1-b8a5-973cefbcd107_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.304s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:15:20 compute-1 nova_compute[235132]: 2025-10-10 10:15:20.684 2 DEBUG nova.storage.rbd_utils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] resizing rbd image bd82d620-e0e5-4fb1-b8a5-973cefbcd107_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 10 10:15:20 compute-1 nova_compute[235132]: 2025-10-10 10:15:20.796 2 DEBUG nova.objects.instance [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'migration_context' on Instance uuid bd82d620-e0e5-4fb1-b8a5-973cefbcd107 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 10:15:20 compute-1 nova_compute[235132]: 2025-10-10 10:15:20.819 2 DEBUG nova.virt.libvirt.driver [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 10 10:15:20 compute-1 nova_compute[235132]: 2025-10-10 10:15:20.820 2 DEBUG nova.virt.libvirt.driver [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Ensure instance console log exists: /var/lib/nova/instances/bd82d620-e0e5-4fb1-b8a5-973cefbcd107/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 10 10:15:20 compute-1 nova_compute[235132]: 2025-10-10 10:15:20.821 2 DEBUG oslo_concurrency.lockutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:15:20 compute-1 nova_compute[235132]: 2025-10-10 10:15:20.821 2 DEBUG oslo_concurrency.lockutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:15:20 compute-1 nova_compute[235132]: 2025-10-10 10:15:20.822 2 DEBUG oslo_concurrency.lockutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:15:20 compute-1 nova_compute[235132]: 2025-10-10 10:15:20.832 2 DEBUG nova.policy [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7956778c03764aaf8906c9b435337976', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 10 10:15:22 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 10:15:22 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 5385 writes, 28K keys, 5385 commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.04 MB/s
                                           Cumulative WAL: 5384 writes, 5384 syncs, 1.00 writes per sync, written: 0.07 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1518 writes, 7361 keys, 1518 commit groups, 1.0 writes per commit group, ingest: 16.90 MB, 0.03 MB/s
                                           Interval WAL: 1517 writes, 1517 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    145.5      0.30              0.15        15    0.020       0      0       0.0       0.0
                                             L6      1/0   13.66 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   4.1    212.4    182.1      0.97              0.55        14    0.069     73K   7379       0.0       0.0
                                            Sum      1/0   13.66 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   5.1    162.7    173.5      1.27              0.71        29    0.044     73K   7379       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.9    187.1    190.6      0.40              0.23        10    0.040     30K   2558       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0    212.4    182.1      0.97              0.55        14    0.069     73K   7379       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    146.7      0.29              0.15        14    0.021       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.042, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.21 GB write, 0.12 MB/s write, 0.20 GB read, 0.11 MB/s read, 1.3 seconds
                                           Interval compaction: 0.07 GB write, 0.13 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5625d3e63350#2 capacity: 304.00 MB usage: 17.45 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000125 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(932,16.87 MB,5.54813%) FilterBlock(29,219.23 KB,0.0704263%) IndexBlock(29,378.61 KB,0.121624%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 10 10:15:22 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:22 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:22 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:22.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:22 compute-1 ceph-mon[79167]: pgmap v882: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:15:22 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:15:22 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:22 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:15:22 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:22.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:15:22 compute-1 nova_compute[235132]: 2025-10-10 10:15:22.991 2 DEBUG nova.network.neutron [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Successfully created port: 562e8418-d47e-4fd1-8a23-094e0ce40097 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 10 10:15:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:15:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:15:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:15:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:15:23 compute-1 nova_compute[235132]: 2025-10-10 10:15:23.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:23 compute-1 nova_compute[235132]: 2025-10-10 10:15:23.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:23 compute-1 nova_compute[235132]: 2025-10-10 10:15:23.727 2 DEBUG nova.network.neutron [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Successfully updated port: 562e8418-d47e-4fd1-8a23-094e0ce40097 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 10 10:15:23 compute-1 nova_compute[235132]: 2025-10-10 10:15:23.746 2 DEBUG oslo_concurrency.lockutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:15:23 compute-1 nova_compute[235132]: 2025-10-10 10:15:23.746 2 DEBUG oslo_concurrency.lockutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquired lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:15:23 compute-1 nova_compute[235132]: 2025-10-10 10:15:23.747 2 DEBUG nova.network.neutron [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 10 10:15:23 compute-1 nova_compute[235132]: 2025-10-10 10:15:23.829 2 DEBUG nova.compute.manager [req-75f17aa8-a779-4c7a-b455-f3d2f72b2360 req-e7cd01d6-6d9a-4af4-9e97-3d5d04854128 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Received event network-changed-562e8418-d47e-4fd1-8a23-094e0ce40097 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:15:23 compute-1 nova_compute[235132]: 2025-10-10 10:15:23.830 2 DEBUG nova.compute.manager [req-75f17aa8-a779-4c7a-b455-f3d2f72b2360 req-e7cd01d6-6d9a-4af4-9e97-3d5d04854128 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Refreshing instance network info cache due to event network-changed-562e8418-d47e-4fd1-8a23-094e0ce40097. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 10 10:15:23 compute-1 nova_compute[235132]: 2025-10-10 10:15:23.831 2 DEBUG oslo_concurrency.lockutils [req-75f17aa8-a779-4c7a-b455-f3d2f72b2360 req-e7cd01d6-6d9a-4af4-9e97-3d5d04854128 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:15:23 compute-1 nova_compute[235132]: 2025-10-10 10:15:23.908 2 DEBUG nova.network.neutron [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 10 10:15:24 compute-1 ceph-mon[79167]: pgmap v883: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:15:24 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:24 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:15:24 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:24.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:15:24 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:24 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:24 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:24.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:25 compute-1 nova_compute[235132]: 2025-10-10 10:15:25.047 2 DEBUG nova.network.neutron [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Updating instance_info_cache with network_info: [{"id": "562e8418-d47e-4fd1-8a23-094e0ce40097", "address": "fa:16:3e:73:fc:1f", "network": {"id": "ebfb122d-a6ca-4257-952a-e1a888448e1c", "bridge": "br-int", "label": "tempest-network-smoke--1217728793", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap562e8418-d4", "ovs_interfaceid": "562e8418-d47e-4fd1-8a23-094e0ce40097", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:15:25 compute-1 nova_compute[235132]: 2025-10-10 10:15:25.076 2 DEBUG oslo_concurrency.lockutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Releasing lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:15:25 compute-1 nova_compute[235132]: 2025-10-10 10:15:25.077 2 DEBUG nova.compute.manager [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Instance network_info: |[{"id": "562e8418-d47e-4fd1-8a23-094e0ce40097", "address": "fa:16:3e:73:fc:1f", "network": {"id": "ebfb122d-a6ca-4257-952a-e1a888448e1c", "bridge": "br-int", "label": "tempest-network-smoke--1217728793", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap562e8418-d4", "ovs_interfaceid": "562e8418-d47e-4fd1-8a23-094e0ce40097", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 10 10:15:25 compute-1 nova_compute[235132]: 2025-10-10 10:15:25.077 2 DEBUG oslo_concurrency.lockutils [req-75f17aa8-a779-4c7a-b455-f3d2f72b2360 req-e7cd01d6-6d9a-4af4-9e97-3d5d04854128 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquired lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:15:25 compute-1 nova_compute[235132]: 2025-10-10 10:15:25.077 2 DEBUG nova.network.neutron [req-75f17aa8-a779-4c7a-b455-f3d2f72b2360 req-e7cd01d6-6d9a-4af4-9e97-3d5d04854128 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Refreshing network info cache for port 562e8418-d47e-4fd1-8a23-094e0ce40097 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 10 10:15:25 compute-1 nova_compute[235132]: 2025-10-10 10:15:25.081 2 DEBUG nova.virt.libvirt.driver [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Start _get_guest_xml network_info=[{"id": "562e8418-d47e-4fd1-8a23-094e0ce40097", "address": "fa:16:3e:73:fc:1f", "network": {"id": "ebfb122d-a6ca-4257-952a-e1a888448e1c", "bridge": "br-int", "label": "tempest-network-smoke--1217728793", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap562e8418-d4", "ovs_interfaceid": "562e8418-d47e-4fd1-8a23-094e0ce40097", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-10T10:09:50Z,direct_url=<?>,disk_format='qcow2',id=5ae78700-970d-45b4-a57d-978a054c7519,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ec962e275689437d80680ff3ea69c852',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-10T10:09:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'boot_index': 0, 'guest_format': None, 'image_id': '5ae78700-970d-45b4-a57d-978a054c7519'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 10 10:15:25 compute-1 nova_compute[235132]: 2025-10-10 10:15:25.085 2 WARNING nova.virt.libvirt.driver [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:15:25 compute-1 nova_compute[235132]: 2025-10-10 10:15:25.091 2 DEBUG nova.virt.libvirt.host [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 10 10:15:25 compute-1 nova_compute[235132]: 2025-10-10 10:15:25.092 2 DEBUG nova.virt.libvirt.host [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 10 10:15:25 compute-1 nova_compute[235132]: 2025-10-10 10:15:25.100 2 DEBUG nova.virt.libvirt.host [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 10 10:15:25 compute-1 nova_compute[235132]: 2025-10-10 10:15:25.100 2 DEBUG nova.virt.libvirt.host [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 10 10:15:25 compute-1 nova_compute[235132]: 2025-10-10 10:15:25.101 2 DEBUG nova.virt.libvirt.driver [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 10 10:15:25 compute-1 nova_compute[235132]: 2025-10-10 10:15:25.102 2 DEBUG nova.virt.hardware [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-10T10:09:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='00373e71-6208-4238-ad85-db0452c53bc6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-10T10:09:50Z,direct_url=<?>,disk_format='qcow2',id=5ae78700-970d-45b4-a57d-978a054c7519,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ec962e275689437d80680ff3ea69c852',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-10T10:09:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 10 10:15:25 compute-1 nova_compute[235132]: 2025-10-10 10:15:25.103 2 DEBUG nova.virt.hardware [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 10 10:15:25 compute-1 nova_compute[235132]: 2025-10-10 10:15:25.103 2 DEBUG nova.virt.hardware [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 10 10:15:25 compute-1 nova_compute[235132]: 2025-10-10 10:15:25.104 2 DEBUG nova.virt.hardware [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 10 10:15:25 compute-1 nova_compute[235132]: 2025-10-10 10:15:25.104 2 DEBUG nova.virt.hardware [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 10 10:15:25 compute-1 nova_compute[235132]: 2025-10-10 10:15:25.104 2 DEBUG nova.virt.hardware [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 10 10:15:25 compute-1 nova_compute[235132]: 2025-10-10 10:15:25.105 2 DEBUG nova.virt.hardware [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 10 10:15:25 compute-1 nova_compute[235132]: 2025-10-10 10:15:25.105 2 DEBUG nova.virt.hardware [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 10 10:15:25 compute-1 nova_compute[235132]: 2025-10-10 10:15:25.106 2 DEBUG nova.virt.hardware [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 10 10:15:25 compute-1 nova_compute[235132]: 2025-10-10 10:15:25.106 2 DEBUG nova.virt.hardware [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 10 10:15:25 compute-1 nova_compute[235132]: 2025-10-10 10:15:25.107 2 DEBUG nova.virt.hardware [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 10 10:15:25 compute-1 nova_compute[235132]: 2025-10-10 10:15:25.112 2 DEBUG oslo_concurrency.processutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:15:25 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 10 10:15:25 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3004183643' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:15:25 compute-1 nova_compute[235132]: 2025-10-10 10:15:25.574 2 DEBUG oslo_concurrency.processutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:15:25 compute-1 nova_compute[235132]: 2025-10-10 10:15:25.608 2 DEBUG nova.storage.rbd_utils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image bd82d620-e0e5-4fb1-b8a5-973cefbcd107_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:15:25 compute-1 nova_compute[235132]: 2025-10-10 10:15:25.613 2 DEBUG oslo_concurrency.processutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:15:26 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 10 10:15:26 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3003324697' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:15:26 compute-1 nova_compute[235132]: 2025-10-10 10:15:26.054 2 DEBUG oslo_concurrency.processutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:15:26 compute-1 nova_compute[235132]: 2025-10-10 10:15:26.056 2 DEBUG nova.virt.libvirt.vif [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-10T10:15:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-217348562',display_name='tempest-TestNetworkBasicOps-server-217348562',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-217348562',id=6,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHLzdAHDdUURWXD0NwbnzkKciFKEQ2omFqKVpiZUQ/jQkwx0IlaJ48FUTUghTozEFkbgWKl3XHIfnAKs6ai2Am8DZErVGD6iO1tzsuGiO5n1KsYJdS5ZP3lMvRFTeABsRg==',key_name='tempest-TestNetworkBasicOps-1625432950',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-qt8amg0k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-10T10:15:20Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=bd82d620-e0e5-4fb1-b8a5-973cefbcd107,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "562e8418-d47e-4fd1-8a23-094e0ce40097", "address": "fa:16:3e:73:fc:1f", "network": {"id": "ebfb122d-a6ca-4257-952a-e1a888448e1c", "bridge": "br-int", "label": "tempest-network-smoke--1217728793", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap562e8418-d4", "ovs_interfaceid": "562e8418-d47e-4fd1-8a23-094e0ce40097", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 10 10:15:26 compute-1 nova_compute[235132]: 2025-10-10 10:15:26.057 2 DEBUG nova.network.os_vif_util [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "562e8418-d47e-4fd1-8a23-094e0ce40097", "address": "fa:16:3e:73:fc:1f", "network": {"id": "ebfb122d-a6ca-4257-952a-e1a888448e1c", "bridge": "br-int", "label": "tempest-network-smoke--1217728793", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap562e8418-d4", "ovs_interfaceid": "562e8418-d47e-4fd1-8a23-094e0ce40097", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 10 10:15:26 compute-1 nova_compute[235132]: 2025-10-10 10:15:26.059 2 DEBUG nova.network.os_vif_util [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:fc:1f,bridge_name='br-int',has_traffic_filtering=True,id=562e8418-d47e-4fd1-8a23-094e0ce40097,network=Network(ebfb122d-a6ca-4257-952a-e1a888448e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap562e8418-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 10 10:15:26 compute-1 nova_compute[235132]: 2025-10-10 10:15:26.061 2 DEBUG nova.objects.instance [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'pci_devices' on Instance uuid bd82d620-e0e5-4fb1-b8a5-973cefbcd107 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 10:15:26 compute-1 nova_compute[235132]: 2025-10-10 10:15:26.085 2 DEBUG nova.virt.libvirt.driver [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] End _get_guest_xml xml=<domain type="kvm">
Oct 10 10:15:26 compute-1 nova_compute[235132]:   <uuid>bd82d620-e0e5-4fb1-b8a5-973cefbcd107</uuid>
Oct 10 10:15:26 compute-1 nova_compute[235132]:   <name>instance-00000006</name>
Oct 10 10:15:26 compute-1 nova_compute[235132]:   <memory>131072</memory>
Oct 10 10:15:26 compute-1 nova_compute[235132]:   <vcpu>1</vcpu>
Oct 10 10:15:26 compute-1 nova_compute[235132]:   <metadata>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 10:15:26 compute-1 nova_compute[235132]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:       <nova:name>tempest-TestNetworkBasicOps-server-217348562</nova:name>
Oct 10 10:15:26 compute-1 nova_compute[235132]:       <nova:creationTime>2025-10-10 10:15:25</nova:creationTime>
Oct 10 10:15:26 compute-1 nova_compute[235132]:       <nova:flavor name="m1.nano">
Oct 10 10:15:26 compute-1 nova_compute[235132]:         <nova:memory>128</nova:memory>
Oct 10 10:15:26 compute-1 nova_compute[235132]:         <nova:disk>1</nova:disk>
Oct 10 10:15:26 compute-1 nova_compute[235132]:         <nova:swap>0</nova:swap>
Oct 10 10:15:26 compute-1 nova_compute[235132]:         <nova:ephemeral>0</nova:ephemeral>
Oct 10 10:15:26 compute-1 nova_compute[235132]:         <nova:vcpus>1</nova:vcpus>
Oct 10 10:15:26 compute-1 nova_compute[235132]:       </nova:flavor>
Oct 10 10:15:26 compute-1 nova_compute[235132]:       <nova:owner>
Oct 10 10:15:26 compute-1 nova_compute[235132]:         <nova:user uuid="7956778c03764aaf8906c9b435337976">tempest-TestNetworkBasicOps-188749107-project-member</nova:user>
Oct 10 10:15:26 compute-1 nova_compute[235132]:         <nova:project uuid="d5e531d4b440422d946eaf6fd4e166f7">tempest-TestNetworkBasicOps-188749107</nova:project>
Oct 10 10:15:26 compute-1 nova_compute[235132]:       </nova:owner>
Oct 10 10:15:26 compute-1 nova_compute[235132]:       <nova:root type="image" uuid="5ae78700-970d-45b4-a57d-978a054c7519"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:       <nova:ports>
Oct 10 10:15:26 compute-1 nova_compute[235132]:         <nova:port uuid="562e8418-d47e-4fd1-8a23-094e0ce40097">
Oct 10 10:15:26 compute-1 nova_compute[235132]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:         </nova:port>
Oct 10 10:15:26 compute-1 nova_compute[235132]:       </nova:ports>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     </nova:instance>
Oct 10 10:15:26 compute-1 nova_compute[235132]:   </metadata>
Oct 10 10:15:26 compute-1 nova_compute[235132]:   <sysinfo type="smbios">
Oct 10 10:15:26 compute-1 nova_compute[235132]:     <system>
Oct 10 10:15:26 compute-1 nova_compute[235132]:       <entry name="manufacturer">RDO</entry>
Oct 10 10:15:26 compute-1 nova_compute[235132]:       <entry name="product">OpenStack Compute</entry>
Oct 10 10:15:26 compute-1 nova_compute[235132]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 10:15:26 compute-1 nova_compute[235132]:       <entry name="serial">bd82d620-e0e5-4fb1-b8a5-973cefbcd107</entry>
Oct 10 10:15:26 compute-1 nova_compute[235132]:       <entry name="uuid">bd82d620-e0e5-4fb1-b8a5-973cefbcd107</entry>
Oct 10 10:15:26 compute-1 nova_compute[235132]:       <entry name="family">Virtual Machine</entry>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     </system>
Oct 10 10:15:26 compute-1 nova_compute[235132]:   </sysinfo>
Oct 10 10:15:26 compute-1 nova_compute[235132]:   <os>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     <boot dev="hd"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     <smbios mode="sysinfo"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:   </os>
Oct 10 10:15:26 compute-1 nova_compute[235132]:   <features>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     <acpi/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     <apic/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     <vmcoreinfo/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:   </features>
Oct 10 10:15:26 compute-1 nova_compute[235132]:   <clock offset="utc">
Oct 10 10:15:26 compute-1 nova_compute[235132]:     <timer name="pit" tickpolicy="delay"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     <timer name="hpet" present="no"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:   </clock>
Oct 10 10:15:26 compute-1 nova_compute[235132]:   <cpu mode="host-model" match="exact">
Oct 10 10:15:26 compute-1 nova_compute[235132]:     <topology sockets="1" cores="1" threads="1"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:   </cpu>
Oct 10 10:15:26 compute-1 nova_compute[235132]:   <devices>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     <disk type="network" device="disk">
Oct 10 10:15:26 compute-1 nova_compute[235132]:       <driver type="raw" cache="none"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:       <source protocol="rbd" name="vms/bd82d620-e0e5-4fb1-b8a5-973cefbcd107_disk">
Oct 10 10:15:26 compute-1 nova_compute[235132]:         <host name="192.168.122.100" port="6789"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:         <host name="192.168.122.102" port="6789"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:         <host name="192.168.122.101" port="6789"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:       </source>
Oct 10 10:15:26 compute-1 nova_compute[235132]:       <auth username="openstack">
Oct 10 10:15:26 compute-1 nova_compute[235132]:         <secret type="ceph" uuid="21f084a3-af34-5230-afe4-ea5cd24a55f4"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:       </auth>
Oct 10 10:15:26 compute-1 nova_compute[235132]:       <target dev="vda" bus="virtio"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     </disk>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     <disk type="network" device="cdrom">
Oct 10 10:15:26 compute-1 nova_compute[235132]:       <driver type="raw" cache="none"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:       <source protocol="rbd" name="vms/bd82d620-e0e5-4fb1-b8a5-973cefbcd107_disk.config">
Oct 10 10:15:26 compute-1 nova_compute[235132]:         <host name="192.168.122.100" port="6789"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:         <host name="192.168.122.102" port="6789"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:         <host name="192.168.122.101" port="6789"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:       </source>
Oct 10 10:15:26 compute-1 nova_compute[235132]:       <auth username="openstack">
Oct 10 10:15:26 compute-1 nova_compute[235132]:         <secret type="ceph" uuid="21f084a3-af34-5230-afe4-ea5cd24a55f4"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:       </auth>
Oct 10 10:15:26 compute-1 nova_compute[235132]:       <target dev="sda" bus="sata"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     </disk>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     <interface type="ethernet">
Oct 10 10:15:26 compute-1 nova_compute[235132]:       <mac address="fa:16:3e:73:fc:1f"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:       <model type="virtio"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:       <driver name="vhost" rx_queue_size="512"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:       <mtu size="1442"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:       <target dev="tap562e8418-d4"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     </interface>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     <serial type="pty">
Oct 10 10:15:26 compute-1 nova_compute[235132]:       <log file="/var/lib/nova/instances/bd82d620-e0e5-4fb1-b8a5-973cefbcd107/console.log" append="off"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     </serial>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     <video>
Oct 10 10:15:26 compute-1 nova_compute[235132]:       <model type="virtio"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     </video>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     <input type="tablet" bus="usb"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     <rng model="virtio">
Oct 10 10:15:26 compute-1 nova_compute[235132]:       <backend model="random">/dev/urandom</backend>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     </rng>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     <controller type="usb" index="0"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     <memballoon model="virtio">
Oct 10 10:15:26 compute-1 nova_compute[235132]:       <stats period="10"/>
Oct 10 10:15:26 compute-1 nova_compute[235132]:     </memballoon>
Oct 10 10:15:26 compute-1 nova_compute[235132]:   </devices>
Oct 10 10:15:26 compute-1 nova_compute[235132]: </domain>
Oct 10 10:15:26 compute-1 nova_compute[235132]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 10 10:15:26 compute-1 nova_compute[235132]: 2025-10-10 10:15:26.087 2 DEBUG nova.compute.manager [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Preparing to wait for external event network-vif-plugged-562e8418-d47e-4fd1-8a23-094e0ce40097 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 10 10:15:26 compute-1 nova_compute[235132]: 2025-10-10 10:15:26.088 2 DEBUG oslo_concurrency.lockutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:15:26 compute-1 nova_compute[235132]: 2025-10-10 10:15:26.088 2 DEBUG oslo_concurrency.lockutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:15:26 compute-1 nova_compute[235132]: 2025-10-10 10:15:26.088 2 DEBUG oslo_concurrency.lockutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:15:26 compute-1 ceph-mon[79167]: pgmap v884: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:15:26 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3004183643' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:15:26 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3003324697' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:15:26 compute-1 nova_compute[235132]: 2025-10-10 10:15:26.090 2 DEBUG nova.virt.libvirt.vif [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-10T10:15:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-217348562',display_name='tempest-TestNetworkBasicOps-server-217348562',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-217348562',id=6,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHLzdAHDdUURWXD0NwbnzkKciFKEQ2omFqKVpiZUQ/jQkwx0IlaJ48FUTUghTozEFkbgWKl3XHIfnAKs6ai2Am8DZErVGD6iO1tzsuGiO5n1KsYJdS5ZP3lMvRFTeABsRg==',key_name='tempest-TestNetworkBasicOps-1625432950',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-qt8amg0k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-10T10:15:20Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=bd82d620-e0e5-4fb1-b8a5-973cefbcd107,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "562e8418-d47e-4fd1-8a23-094e0ce40097", "address": "fa:16:3e:73:fc:1f", "network": {"id": "ebfb122d-a6ca-4257-952a-e1a888448e1c", "bridge": "br-int", "label": "tempest-network-smoke--1217728793", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap562e8418-d4", "ovs_interfaceid": "562e8418-d47e-4fd1-8a23-094e0ce40097", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 10 10:15:26 compute-1 nova_compute[235132]: 2025-10-10 10:15:26.090 2 DEBUG nova.network.os_vif_util [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "562e8418-d47e-4fd1-8a23-094e0ce40097", "address": "fa:16:3e:73:fc:1f", "network": {"id": "ebfb122d-a6ca-4257-952a-e1a888448e1c", "bridge": "br-int", "label": "tempest-network-smoke--1217728793", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap562e8418-d4", "ovs_interfaceid": "562e8418-d47e-4fd1-8a23-094e0ce40097", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 10 10:15:26 compute-1 nova_compute[235132]: 2025-10-10 10:15:26.091 2 DEBUG nova.network.os_vif_util [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:73:fc:1f,bridge_name='br-int',has_traffic_filtering=True,id=562e8418-d47e-4fd1-8a23-094e0ce40097,network=Network(ebfb122d-a6ca-4257-952a-e1a888448e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap562e8418-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 10 10:15:26 compute-1 nova_compute[235132]: 2025-10-10 10:15:26.092 2 DEBUG os_vif [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:fc:1f,bridge_name='br-int',has_traffic_filtering=True,id=562e8418-d47e-4fd1-8a23-094e0ce40097,network=Network(ebfb122d-a6ca-4257-952a-e1a888448e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap562e8418-d4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 10 10:15:26 compute-1 nova_compute[235132]: 2025-10-10 10:15:26.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:26 compute-1 nova_compute[235132]: 2025-10-10 10:15:26.094 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:15:26 compute-1 nova_compute[235132]: 2025-10-10 10:15:26.094 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 10 10:15:26 compute-1 nova_compute[235132]: 2025-10-10 10:15:26.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:26 compute-1 nova_compute[235132]: 2025-10-10 10:15:26.099 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap562e8418-d4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:15:26 compute-1 nova_compute[235132]: 2025-10-10 10:15:26.099 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap562e8418-d4, col_values=(('external_ids', {'iface-id': '562e8418-d47e-4fd1-8a23-094e0ce40097', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:73:fc:1f', 'vm-uuid': 'bd82d620-e0e5-4fb1-b8a5-973cefbcd107'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:15:26 compute-1 nova_compute[235132]: 2025-10-10 10:15:26.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:26 compute-1 NetworkManager[44982]: <info>  [1760091326.1023] manager: (tap562e8418-d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Oct 10 10:15:26 compute-1 nova_compute[235132]: 2025-10-10 10:15:26.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 10 10:15:26 compute-1 nova_compute[235132]: 2025-10-10 10:15:26.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:26 compute-1 nova_compute[235132]: 2025-10-10 10:15:26.113 2 INFO os_vif [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:73:fc:1f,bridge_name='br-int',has_traffic_filtering=True,id=562e8418-d47e-4fd1-8a23-094e0ce40097,network=Network(ebfb122d-a6ca-4257-952a-e1a888448e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap562e8418-d4')
Oct 10 10:15:26 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:26 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:26 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:26.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:26 compute-1 nova_compute[235132]: 2025-10-10 10:15:26.175 2 DEBUG nova.virt.libvirt.driver [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 10 10:15:26 compute-1 nova_compute[235132]: 2025-10-10 10:15:26.175 2 DEBUG nova.virt.libvirt.driver [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 10 10:15:26 compute-1 nova_compute[235132]: 2025-10-10 10:15:26.176 2 DEBUG nova.virt.libvirt.driver [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No VIF found with MAC fa:16:3e:73:fc:1f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 10 10:15:26 compute-1 nova_compute[235132]: 2025-10-10 10:15:26.176 2 INFO nova.virt.libvirt.driver [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Using config drive
Oct 10 10:15:26 compute-1 nova_compute[235132]: 2025-10-10 10:15:26.207 2 DEBUG nova.storage.rbd_utils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image bd82d620-e0e5-4fb1-b8a5-973cefbcd107_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:15:26 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:26 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:26 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:26.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:26 compute-1 nova_compute[235132]: 2025-10-10 10:15:26.882 2 INFO nova.virt.libvirt.driver [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Creating config drive at /var/lib/nova/instances/bd82d620-e0e5-4fb1-b8a5-973cefbcd107/disk.config
Oct 10 10:15:26 compute-1 nova_compute[235132]: 2025-10-10 10:15:26.887 2 DEBUG oslo_concurrency.processutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bd82d620-e0e5-4fb1-b8a5-973cefbcd107/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkq3vdkm8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:15:27 compute-1 nova_compute[235132]: 2025-10-10 10:15:27.013 2 DEBUG oslo_concurrency.processutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bd82d620-e0e5-4fb1-b8a5-973cefbcd107/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkq3vdkm8" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:15:27 compute-1 nova_compute[235132]: 2025-10-10 10:15:27.044 2 DEBUG nova.storage.rbd_utils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image bd82d620-e0e5-4fb1-b8a5-973cefbcd107_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:15:27 compute-1 nova_compute[235132]: 2025-10-10 10:15:27.049 2 DEBUG oslo_concurrency.processutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bd82d620-e0e5-4fb1-b8a5-973cefbcd107/disk.config bd82d620-e0e5-4fb1-b8a5-973cefbcd107_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:15:27 compute-1 nova_compute[235132]: 2025-10-10 10:15:27.077 2 DEBUG nova.network.neutron [req-75f17aa8-a779-4c7a-b455-f3d2f72b2360 req-e7cd01d6-6d9a-4af4-9e97-3d5d04854128 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Updated VIF entry in instance network info cache for port 562e8418-d47e-4fd1-8a23-094e0ce40097. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 10 10:15:27 compute-1 nova_compute[235132]: 2025-10-10 10:15:27.079 2 DEBUG nova.network.neutron [req-75f17aa8-a779-4c7a-b455-f3d2f72b2360 req-e7cd01d6-6d9a-4af4-9e97-3d5d04854128 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Updating instance_info_cache with network_info: [{"id": "562e8418-d47e-4fd1-8a23-094e0ce40097", "address": "fa:16:3e:73:fc:1f", "network": {"id": "ebfb122d-a6ca-4257-952a-e1a888448e1c", "bridge": "br-int", "label": "tempest-network-smoke--1217728793", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap562e8418-d4", "ovs_interfaceid": "562e8418-d47e-4fd1-8a23-094e0ce40097", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:15:27 compute-1 nova_compute[235132]: 2025-10-10 10:15:27.098 2 DEBUG oslo_concurrency.lockutils [req-75f17aa8-a779-4c7a-b455-f3d2f72b2360 req-e7cd01d6-6d9a-4af4-9e97-3d5d04854128 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Releasing lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:15:27 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/3262215243' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:15:27 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/3262215243' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:15:27 compute-1 nova_compute[235132]: 2025-10-10 10:15:27.228 2 DEBUG oslo_concurrency.processutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bd82d620-e0e5-4fb1-b8a5-973cefbcd107/disk.config bd82d620-e0e5-4fb1-b8a5-973cefbcd107_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:15:27 compute-1 nova_compute[235132]: 2025-10-10 10:15:27.229 2 INFO nova.virt.libvirt.driver [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Deleting local config drive /var/lib/nova/instances/bd82d620-e0e5-4fb1-b8a5-973cefbcd107/disk.config because it was imported into RBD.
Oct 10 10:15:27 compute-1 kernel: tap562e8418-d4: entered promiscuous mode
Oct 10 10:15:27 compute-1 ovn_controller[131749]: 2025-10-10T10:15:27Z|00067|binding|INFO|Claiming lport 562e8418-d47e-4fd1-8a23-094e0ce40097 for this chassis.
Oct 10 10:15:27 compute-1 ovn_controller[131749]: 2025-10-10T10:15:27Z|00068|binding|INFO|562e8418-d47e-4fd1-8a23-094e0ce40097: Claiming fa:16:3e:73:fc:1f 10.100.0.12
Oct 10 10:15:27 compute-1 nova_compute[235132]: 2025-10-10 10:15:27.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:27 compute-1 NetworkManager[44982]: <info>  [1760091327.3119] manager: (tap562e8418-d4): new Tun device (/org/freedesktop/NetworkManager/Devices/50)
Oct 10 10:15:27 compute-1 nova_compute[235132]: 2025-10-10 10:15:27.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:27 compute-1 nova_compute[235132]: 2025-10-10 10:15:27.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.334 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:fc:1f 10.100.0.12'], port_security=['fa:16:3e:73:fc:1f 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'bd82d620-e0e5-4fb1-b8a5-973cefbcd107', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ebfb122d-a6ca-4257-952a-e1a888448e1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5f14a6f9-41f9-49f8-b407-62ca2cdc0259', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46d717de-5083-46ba-b06e-f3ccc6cb202a, chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>], logical_port=562e8418-d47e-4fd1-8a23-094e0ce40097) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.335 141156 INFO neutron.agent.ovn.metadata.agent [-] Port 562e8418-d47e-4fd1-8a23-094e0ce40097 in datapath ebfb122d-a6ca-4257-952a-e1a888448e1c bound to our chassis
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.335 141156 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ebfb122d-a6ca-4257-952a-e1a888448e1c
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.350 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[9b0c7f0a-64cc-4888-b415-7e4a51d422ca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.351 141156 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapebfb122d-a1 in ovnmeta-ebfb122d-a6ca-4257-952a-e1a888448e1c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.354 238898 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapebfb122d-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.354 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[66e6985a-f200-450e-b963-27baf6ffe40e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.355 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[9119c73f-d432-40e4-afb9-96f7dae9b0f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:15:27 compute-1 systemd-machined[191637]: New machine qemu-4-instance-00000006.
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.379 141275 DEBUG oslo.privsep.daemon [-] privsep: reply[47a015be-72bc-47f8-b687-db3d30d6bf0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:15:27 compute-1 systemd[1]: Started Virtual Machine qemu-4-instance-00000006.
Oct 10 10:15:27 compute-1 systemd-udevd[243002]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 10:15:27 compute-1 nova_compute[235132]: 2025-10-10 10:15:27.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:27 compute-1 ovn_controller[131749]: 2025-10-10T10:15:27Z|00069|binding|INFO|Setting lport 562e8418-d47e-4fd1-8a23-094e0ce40097 ovn-installed in OVS
Oct 10 10:15:27 compute-1 ovn_controller[131749]: 2025-10-10T10:15:27Z|00070|binding|INFO|Setting lport 562e8418-d47e-4fd1-8a23-094e0ce40097 up in Southbound
Oct 10 10:15:27 compute-1 nova_compute[235132]: 2025-10-10 10:15:27.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.409 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[ea1f5903-82f6-41e6-a0c9-f3bf56b95ffd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:15:27 compute-1 NetworkManager[44982]: <info>  [1760091327.4223] device (tap562e8418-d4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 10:15:27 compute-1 NetworkManager[44982]: <info>  [1760091327.4231] device (tap562e8418-d4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 10 10:15:27 compute-1 podman[242971]: 2025-10-10 10:15:27.441690106 +0000 UTC m=+0.079841890 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.443 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[c9ec2ef3-c900-42e5-825e-8583c5dce16b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.448 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[c9a99448-38d8-4341-8593-3ea076f381a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:15:27 compute-1 NetworkManager[44982]: <info>  [1760091327.4493] manager: (tapebfb122d-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/51)
Oct 10 10:15:27 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:15:27 compute-1 systemd-udevd[243018]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 10:15:27 compute-1 podman[242970]: 2025-10-10 10:15:27.461222421 +0000 UTC m=+0.110402658 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.479 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[4335649e-524f-40e8-8a4c-7747062c3d62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.482 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[cdfcf517-9f3c-4342-bdf0-ea3f11470b07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:15:27 compute-1 podman[242973]: 2025-10-10 10:15:27.498247336 +0000 UTC m=+0.134587630 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 10 10:15:27 compute-1 NetworkManager[44982]: <info>  [1760091327.5028] device (tapebfb122d-a0): carrier: link connected
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.506 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[5f3efe5f-7052-41f4-9d5a-92887fd0d804]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.521 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[120fba4a-9b35-422e-b3c4-5cc5d48a5815]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapebfb122d-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:64:51'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 424121, 'reachable_time': 37562, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243068, 'error': None, 'target': 'ovnmeta-ebfb122d-a6ca-4257-952a-e1a888448e1c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.534 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[55345305-584a-440b-8c9a-809cba5e6c6f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe09:6451'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 424121, 'tstamp': 424121}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243069, 'error': None, 'target': 'ovnmeta-ebfb122d-a6ca-4257-952a-e1a888448e1c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.549 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[1299e3d2-6a98-48f6-bffe-b3a727aa14ce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapebfb122d-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:64:51'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 424121, 'reachable_time': 37562, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 243070, 'error': None, 'target': 'ovnmeta-ebfb122d-a6ca-4257-952a-e1a888448e1c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.571 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[47f8afbd-8871-4ba5-914e-26892ff430c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:15:27 compute-1 nova_compute[235132]: 2025-10-10 10:15:27.605 2 DEBUG nova.compute.manager [req-5050bd20-9fe3-4785-a14b-df761d8fdd8d req-1947aeed-775d-4562-804b-4bfa0244c286 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Received event network-vif-plugged-562e8418-d47e-4fd1-8a23-094e0ce40097 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:15:27 compute-1 nova_compute[235132]: 2025-10-10 10:15:27.606 2 DEBUG oslo_concurrency.lockutils [req-5050bd20-9fe3-4785-a14b-df761d8fdd8d req-1947aeed-775d-4562-804b-4bfa0244c286 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:15:27 compute-1 nova_compute[235132]: 2025-10-10 10:15:27.606 2 DEBUG oslo_concurrency.lockutils [req-5050bd20-9fe3-4785-a14b-df761d8fdd8d req-1947aeed-775d-4562-804b-4bfa0244c286 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:15:27 compute-1 nova_compute[235132]: 2025-10-10 10:15:27.606 2 DEBUG oslo_concurrency.lockutils [req-5050bd20-9fe3-4785-a14b-df761d8fdd8d req-1947aeed-775d-4562-804b-4bfa0244c286 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:15:27 compute-1 nova_compute[235132]: 2025-10-10 10:15:27.606 2 DEBUG nova.compute.manager [req-5050bd20-9fe3-4785-a14b-df761d8fdd8d req-1947aeed-775d-4562-804b-4bfa0244c286 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Processing event network-vif-plugged-562e8418-d47e-4fd1-8a23-094e0ce40097 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.628 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[6a91a37b-e509-4b9a-91ac-69b31d313215]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.630 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebfb122d-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.630 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.631 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapebfb122d-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:15:27 compute-1 nova_compute[235132]: 2025-10-10 10:15:27.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:27 compute-1 NetworkManager[44982]: <info>  [1760091327.6350] manager: (tapebfb122d-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Oct 10 10:15:27 compute-1 kernel: tapebfb122d-a0: entered promiscuous mode
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.646 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapebfb122d-a0, col_values=(('external_ids', {'iface-id': '318e6d8e-f58f-407d-854f-d27adc402b34'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:15:27 compute-1 nova_compute[235132]: 2025-10-10 10:15:27.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:27 compute-1 ovn_controller[131749]: 2025-10-10T10:15:27Z|00071|binding|INFO|Releasing lport 318e6d8e-f58f-407d-854f-d27adc402b34 from this chassis (sb_readonly=0)
Oct 10 10:15:27 compute-1 nova_compute[235132]: 2025-10-10 10:15:27.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.677 141156 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ebfb122d-a6ca-4257-952a-e1a888448e1c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ebfb122d-a6ca-4257-952a-e1a888448e1c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.678 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[8b9f2ed1-b5b5-4eb4-b592-fd1b89f808b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.679 141156 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]: global
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]:     log         /dev/log local0 debug
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]:     log-tag     haproxy-metadata-proxy-ebfb122d-a6ca-4257-952a-e1a888448e1c
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]:     user        root
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]:     group       root
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]:     maxconn     1024
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]:     pidfile     /var/lib/neutron/external/pids/ebfb122d-a6ca-4257-952a-e1a888448e1c.pid.haproxy
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]:     daemon
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]: 
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]: defaults
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]:     log global
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]:     mode http
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]:     option httplog
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]:     option dontlognull
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]:     option http-server-close
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]:     option forwardfor
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]:     retries                 3
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]:     timeout http-request    30s
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]:     timeout connect         30s
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]:     timeout client          32s
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]:     timeout server          32s
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]:     timeout http-keep-alive 30s
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]: 
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]: 
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]: listen listener
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]:     bind 169.254.169.254:80
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]:     server metadata /var/lib/neutron/metadata_proxy
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]:     http-request add-header X-OVN-Network-ID ebfb122d-a6ca-4257-952a-e1a888448e1c
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 10 10:15:27 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:27.680 141156 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ebfb122d-a6ca-4257-952a-e1a888448e1c', 'env', 'PROCESS_TAG=haproxy-ebfb122d-a6ca-4257-952a-e1a888448e1c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ebfb122d-a6ca-4257-952a-e1a888448e1c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 10 10:15:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:15:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:15:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:15:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:15:28 compute-1 nova_compute[235132]: 2025-10-10 10:15:28.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:28 compute-1 ceph-mon[79167]: pgmap v885: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:15:28 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:28 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:28 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:28.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:28 compute-1 podman[243144]: 2025-10-10 10:15:28.154883857 +0000 UTC m=+0.083213022 container create 35ed349cd81e0090d1cc262ca291722ba38bb459c9f6b6d929daf427a5465c69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ebfb122d-a6ca-4257-952a-e1a888448e1c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 10 10:15:28 compute-1 podman[243144]: 2025-10-10 10:15:28.1119374 +0000 UTC m=+0.040266575 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 10 10:15:28 compute-1 systemd[1]: Started libpod-conmon-35ed349cd81e0090d1cc262ca291722ba38bb459c9f6b6d929daf427a5465c69.scope.
Oct 10 10:15:28 compute-1 systemd[1]: Started libcrun container.
Oct 10 10:15:28 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b1835549180fe222ffd4c8fc7255dc61386526a292af096d5df92e7189879c8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 10 10:15:28 compute-1 podman[243144]: 2025-10-10 10:15:28.267964117 +0000 UTC m=+0.196293302 container init 35ed349cd81e0090d1cc262ca291722ba38bb459c9f6b6d929daf427a5465c69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ebfb122d-a6ca-4257-952a-e1a888448e1c, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:15:28 compute-1 podman[243144]: 2025-10-10 10:15:28.272683836 +0000 UTC m=+0.201012991 container start 35ed349cd81e0090d1cc262ca291722ba38bb459c9f6b6d929daf427a5465c69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ebfb122d-a6ca-4257-952a-e1a888448e1c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 10 10:15:28 compute-1 neutron-haproxy-ovnmeta-ebfb122d-a6ca-4257-952a-e1a888448e1c[243159]: [NOTICE]   (243163) : New worker (243165) forked
Oct 10 10:15:28 compute-1 neutron-haproxy-ovnmeta-ebfb122d-a6ca-4257-952a-e1a888448e1c[243159]: [NOTICE]   (243163) : Loading success.
Oct 10 10:15:28 compute-1 nova_compute[235132]: 2025-10-10 10:15:28.523 2 DEBUG nova.compute.manager [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 10 10:15:28 compute-1 nova_compute[235132]: 2025-10-10 10:15:28.524 2 DEBUG nova.virt.driver [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Emitting event <LifecycleEvent: 1760091328.5224319, bd82d620-e0e5-4fb1-b8a5-973cefbcd107 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 10:15:28 compute-1 nova_compute[235132]: 2025-10-10 10:15:28.525 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] VM Started (Lifecycle Event)
Oct 10 10:15:28 compute-1 nova_compute[235132]: 2025-10-10 10:15:28.529 2 DEBUG nova.virt.libvirt.driver [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 10 10:15:28 compute-1 nova_compute[235132]: 2025-10-10 10:15:28.534 2 INFO nova.virt.libvirt.driver [-] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Instance spawned successfully.
Oct 10 10:15:28 compute-1 nova_compute[235132]: 2025-10-10 10:15:28.535 2 DEBUG nova.virt.libvirt.driver [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 10 10:15:28 compute-1 nova_compute[235132]: 2025-10-10 10:15:28.559 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:15:28 compute-1 nova_compute[235132]: 2025-10-10 10:15:28.567 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 10 10:15:28 compute-1 nova_compute[235132]: 2025-10-10 10:15:28.575 2 DEBUG nova.virt.libvirt.driver [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:15:28 compute-1 nova_compute[235132]: 2025-10-10 10:15:28.576 2 DEBUG nova.virt.libvirt.driver [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:15:28 compute-1 nova_compute[235132]: 2025-10-10 10:15:28.578 2 DEBUG nova.virt.libvirt.driver [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:15:28 compute-1 nova_compute[235132]: 2025-10-10 10:15:28.578 2 DEBUG nova.virt.libvirt.driver [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:15:28 compute-1 nova_compute[235132]: 2025-10-10 10:15:28.579 2 DEBUG nova.virt.libvirt.driver [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:15:28 compute-1 nova_compute[235132]: 2025-10-10 10:15:28.580 2 DEBUG nova.virt.libvirt.driver [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:15:28 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:28 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:15:28 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:28.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:15:28 compute-1 nova_compute[235132]: 2025-10-10 10:15:28.589 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 10 10:15:28 compute-1 nova_compute[235132]: 2025-10-10 10:15:28.589 2 DEBUG nova.virt.driver [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Emitting event <LifecycleEvent: 1760091328.5238392, bd82d620-e0e5-4fb1-b8a5-973cefbcd107 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 10:15:28 compute-1 nova_compute[235132]: 2025-10-10 10:15:28.589 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] VM Paused (Lifecycle Event)
Oct 10 10:15:28 compute-1 nova_compute[235132]: 2025-10-10 10:15:28.615 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:15:28 compute-1 nova_compute[235132]: 2025-10-10 10:15:28.618 2 DEBUG nova.virt.driver [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Emitting event <LifecycleEvent: 1760091328.5283499, bd82d620-e0e5-4fb1-b8a5-973cefbcd107 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 10:15:28 compute-1 nova_compute[235132]: 2025-10-10 10:15:28.619 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] VM Resumed (Lifecycle Event)
Oct 10 10:15:28 compute-1 nova_compute[235132]: 2025-10-10 10:15:28.642 2 INFO nova.compute.manager [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Took 8.52 seconds to spawn the instance on the hypervisor.
Oct 10 10:15:28 compute-1 nova_compute[235132]: 2025-10-10 10:15:28.642 2 DEBUG nova.compute.manager [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:15:28 compute-1 nova_compute[235132]: 2025-10-10 10:15:28.644 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:15:28 compute-1 nova_compute[235132]: 2025-10-10 10:15:28.652 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 10 10:15:28 compute-1 nova_compute[235132]: 2025-10-10 10:15:28.685 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 10 10:15:28 compute-1 nova_compute[235132]: 2025-10-10 10:15:28.709 2 INFO nova.compute.manager [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Took 9.45 seconds to build instance.
Oct 10 10:15:28 compute-1 nova_compute[235132]: 2025-10-10 10:15:28.723 2 DEBUG oslo_concurrency.lockutils [None req-7217c05c-6ef1-47a6-bff2-adfe99ddc10b 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.533s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:15:29 compute-1 nova_compute[235132]: 2025-10-10 10:15:29.713 2 DEBUG nova.compute.manager [req-4ffba520-83b1-4551-94ae-cf05f59862ac req-e9cc2d68-1c14-4c92-af6c-368bfc9b51dd 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Received event network-vif-plugged-562e8418-d47e-4fd1-8a23-094e0ce40097 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:15:29 compute-1 nova_compute[235132]: 2025-10-10 10:15:29.713 2 DEBUG oslo_concurrency.lockutils [req-4ffba520-83b1-4551-94ae-cf05f59862ac req-e9cc2d68-1c14-4c92-af6c-368bfc9b51dd 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:15:29 compute-1 nova_compute[235132]: 2025-10-10 10:15:29.714 2 DEBUG oslo_concurrency.lockutils [req-4ffba520-83b1-4551-94ae-cf05f59862ac req-e9cc2d68-1c14-4c92-af6c-368bfc9b51dd 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:15:29 compute-1 nova_compute[235132]: 2025-10-10 10:15:29.714 2 DEBUG oslo_concurrency.lockutils [req-4ffba520-83b1-4551-94ae-cf05f59862ac req-e9cc2d68-1c14-4c92-af6c-368bfc9b51dd 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:15:29 compute-1 nova_compute[235132]: 2025-10-10 10:15:29.714 2 DEBUG nova.compute.manager [req-4ffba520-83b1-4551-94ae-cf05f59862ac req-e9cc2d68-1c14-4c92-af6c-368bfc9b51dd 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] No waiting events found dispatching network-vif-plugged-562e8418-d47e-4fd1-8a23-094e0ce40097 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 10:15:29 compute-1 nova_compute[235132]: 2025-10-10 10:15:29.714 2 WARNING nova.compute.manager [req-4ffba520-83b1-4551-94ae-cf05f59862ac req-e9cc2d68-1c14-4c92-af6c-368bfc9b51dd 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Received unexpected event network-vif-plugged-562e8418-d47e-4fd1-8a23-094e0ce40097 for instance with vm_state active and task_state None.
Oct 10 10:15:30 compute-1 ceph-mon[79167]: pgmap v886: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct 10 10:15:30 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:30 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:30 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:30.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:30 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:30 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:30 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:30.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:31 compute-1 nova_compute[235132]: 2025-10-10 10:15:31.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:32 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:32 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:32 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:32.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:32 compute-1 ceph-mon[79167]: pgmap v887: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct 10 10:15:32 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:15:32 compute-1 ovn_controller[131749]: 2025-10-10T10:15:32Z|00072|binding|INFO|Releasing lport 318e6d8e-f58f-407d-854f-d27adc402b34 from this chassis (sb_readonly=0)
Oct 10 10:15:32 compute-1 NetworkManager[44982]: <info>  [1760091332.1507] manager: (patch-br-int-to-provnet-1d90fa58-74cb-4ad4-84e0-739689a69111): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Oct 10 10:15:32 compute-1 nova_compute[235132]: 2025-10-10 10:15:32.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:32 compute-1 NetworkManager[44982]: <info>  [1760091332.1517] manager: (patch-provnet-1d90fa58-74cb-4ad4-84e0-739689a69111-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Oct 10 10:15:32 compute-1 ovn_controller[131749]: 2025-10-10T10:15:32Z|00073|binding|INFO|Releasing lport 318e6d8e-f58f-407d-854f-d27adc402b34 from this chassis (sb_readonly=0)
Oct 10 10:15:32 compute-1 nova_compute[235132]: 2025-10-10 10:15:32.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:32 compute-1 nova_compute[235132]: 2025-10-10 10:15:32.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:32 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:15:32 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:32 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:15:32 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:32.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:15:32 compute-1 nova_compute[235132]: 2025-10-10 10:15:32.640 2 DEBUG nova.compute.manager [req-5a748aae-068c-4929-9874-4fd98a3bc8e6 req-5ad978d1-103f-4734-84c5-c218087d1f28 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Received event network-changed-562e8418-d47e-4fd1-8a23-094e0ce40097 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:15:32 compute-1 nova_compute[235132]: 2025-10-10 10:15:32.641 2 DEBUG nova.compute.manager [req-5a748aae-068c-4929-9874-4fd98a3bc8e6 req-5ad978d1-103f-4734-84c5-c218087d1f28 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Refreshing instance network info cache due to event network-changed-562e8418-d47e-4fd1-8a23-094e0ce40097. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 10 10:15:32 compute-1 nova_compute[235132]: 2025-10-10 10:15:32.641 2 DEBUG oslo_concurrency.lockutils [req-5a748aae-068c-4929-9874-4fd98a3bc8e6 req-5ad978d1-103f-4734-84c5-c218087d1f28 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:15:32 compute-1 nova_compute[235132]: 2025-10-10 10:15:32.642 2 DEBUG oslo_concurrency.lockutils [req-5a748aae-068c-4929-9874-4fd98a3bc8e6 req-5ad978d1-103f-4734-84c5-c218087d1f28 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquired lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:15:32 compute-1 nova_compute[235132]: 2025-10-10 10:15:32.642 2 DEBUG nova.network.neutron [req-5a748aae-068c-4929-9874-4fd98a3bc8e6 req-5ad978d1-103f-4734-84c5-c218087d1f28 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Refreshing network info cache for port 562e8418-d47e-4fd1-8a23-094e0ce40097 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 10 10:15:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:15:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:15:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:15:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:15:33 compute-1 nova_compute[235132]: 2025-10-10 10:15:33.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:34 compute-1 nova_compute[235132]: 2025-10-10 10:15:34.029 2 DEBUG nova.network.neutron [req-5a748aae-068c-4929-9874-4fd98a3bc8e6 req-5ad978d1-103f-4734-84c5-c218087d1f28 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Updated VIF entry in instance network info cache for port 562e8418-d47e-4fd1-8a23-094e0ce40097. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 10 10:15:34 compute-1 nova_compute[235132]: 2025-10-10 10:15:34.030 2 DEBUG nova.network.neutron [req-5a748aae-068c-4929-9874-4fd98a3bc8e6 req-5ad978d1-103f-4734-84c5-c218087d1f28 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Updating instance_info_cache with network_info: [{"id": "562e8418-d47e-4fd1-8a23-094e0ce40097", "address": "fa:16:3e:73:fc:1f", "network": {"id": "ebfb122d-a6ca-4257-952a-e1a888448e1c", "bridge": "br-int", "label": "tempest-network-smoke--1217728793", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap562e8418-d4", "ovs_interfaceid": "562e8418-d47e-4fd1-8a23-094e0ce40097", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:15:34 compute-1 nova_compute[235132]: 2025-10-10 10:15:34.049 2 DEBUG oslo_concurrency.lockutils [req-5a748aae-068c-4929-9874-4fd98a3bc8e6 req-5ad978d1-103f-4734-84c5-c218087d1f28 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Releasing lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:15:34 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:34 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:15:34 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:34.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:15:34 compute-1 ceph-mon[79167]: pgmap v888: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 10 10:15:34 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:34 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:34 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:34.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:36 compute-1 nova_compute[235132]: 2025-10-10 10:15:36.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:36 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:36 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:36 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:36.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:36 compute-1 ceph-mon[79167]: pgmap v889: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 10 10:15:36 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:36 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:36 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:36.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:15:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:15:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:15:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:15:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:15:38 compute-1 nova_compute[235132]: 2025-10-10 10:15:38.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:38 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:38 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:38 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:38.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:38 compute-1 ceph-mon[79167]: pgmap v890: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 10 10:15:38 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:38 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:15:38 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:38.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:15:38 compute-1 sudo[243180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:15:38 compute-1 sudo[243180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:15:38 compute-1 sudo[243180]: pam_unix(sudo:session): session closed for user root
Oct 10 10:15:40 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:40 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:15:40 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:40.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:15:40 compute-1 ceph-mon[79167]: pgmap v891: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Oct 10 10:15:40 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:40 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:40 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:40.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:40 compute-1 ovn_controller[131749]: 2025-10-10T10:15:40Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:73:fc:1f 10.100.0.12
Oct 10 10:15:40 compute-1 ovn_controller[131749]: 2025-10-10T10:15:40Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:73:fc:1f 10.100.0.12
Oct 10 10:15:41 compute-1 nova_compute[235132]: 2025-10-10 10:15:41.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:41 compute-1 sudo[243207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:15:41 compute-1 sudo[243207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:15:41 compute-1 sudo[243207]: pam_unix(sudo:session): session closed for user root
Oct 10 10:15:41 compute-1 sudo[243232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:15:41 compute-1 sudo[243232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:15:41 compute-1 sudo[243232]: pam_unix(sudo:session): session closed for user root
Oct 10 10:15:41 compute-1 sudo[243290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:15:42 compute-1 sudo[243290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:15:42 compute-1 sudo[243290]: pam_unix(sudo:session): session closed for user root
Oct 10 10:15:42 compute-1 sudo[243315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 21f084a3-af34-5230-afe4-ea5cd24a55f4 -- inventory --format=json-pretty --filter-for-batch
Oct 10 10:15:42 compute-1 sudo[243315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:15:42 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:42 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:42 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:42.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:42.211 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:15:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:42.211 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:15:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:42.212 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:15:42 compute-1 ceph-mon[79167]: pgmap v892: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 69 op/s
Oct 10 10:15:42 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:15:42 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:15:42 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:15:42 compute-1 podman[243381]: 2025-10-10 10:15:42.515268739 +0000 UTC m=+0.076397246 container create 9d82302d2d448f3f459213ca268214c6d21411f6d2c7889f19547dc85ea571d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_fermat, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:15:42 compute-1 podman[243381]: 2025-10-10 10:15:42.467524569 +0000 UTC m=+0.028653066 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:15:42 compute-1 systemd[1]: Started libpod-conmon-9d82302d2d448f3f459213ca268214c6d21411f6d2c7889f19547dc85ea571d5.scope.
Oct 10 10:15:42 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:42 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:15:42 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:42.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:15:42 compute-1 systemd[1]: Started libcrun container.
Oct 10 10:15:42 compute-1 podman[243381]: 2025-10-10 10:15:42.630164759 +0000 UTC m=+0.191293296 container init 9d82302d2d448f3f459213ca268214c6d21411f6d2c7889f19547dc85ea571d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:15:42 compute-1 podman[243381]: 2025-10-10 10:15:42.643616657 +0000 UTC m=+0.204745144 container start 9d82302d2d448f3f459213ca268214c6d21411f6d2c7889f19547dc85ea571d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_fermat, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Oct 10 10:15:42 compute-1 podman[243381]: 2025-10-10 10:15:42.647518454 +0000 UTC m=+0.208646981 container attach 9d82302d2d448f3f459213ca268214c6d21411f6d2c7889f19547dc85ea571d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_fermat, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 10:15:42 compute-1 awesome_fermat[243398]: 167 167
Oct 10 10:15:42 compute-1 systemd[1]: libpod-9d82302d2d448f3f459213ca268214c6d21411f6d2c7889f19547dc85ea571d5.scope: Deactivated successfully.
Oct 10 10:15:42 compute-1 podman[243381]: 2025-10-10 10:15:42.656741567 +0000 UTC m=+0.217870054 container died 9d82302d2d448f3f459213ca268214c6d21411f6d2c7889f19547dc85ea571d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_fermat, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:15:42 compute-1 systemd[1]: var-lib-containers-storage-overlay-a414090e3ef29d589aa25f38409169e23c7f2325f26c0ccb3a002c371e89aebf-merged.mount: Deactivated successfully.
Oct 10 10:15:42 compute-1 podman[243381]: 2025-10-10 10:15:42.70135376 +0000 UTC m=+0.262482237 container remove 9d82302d2d448f3f459213ca268214c6d21411f6d2c7889f19547dc85ea571d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_fermat, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 10 10:15:42 compute-1 systemd[1]: libpod-conmon-9d82302d2d448f3f459213ca268214c6d21411f6d2c7889f19547dc85ea571d5.scope: Deactivated successfully.
Oct 10 10:15:42 compute-1 podman[243422]: 2025-10-10 10:15:42.954247833 +0000 UTC m=+0.082764860 container create 148401d5c360810a926f5b6b82c241e641015dcefd162defebcbdf87ab8c96ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_hypatia, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 10 10:15:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:15:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:43 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:15:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:43 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:15:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:43 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:15:43 compute-1 systemd[1]: Started libpod-conmon-148401d5c360810a926f5b6b82c241e641015dcefd162defebcbdf87ab8c96ef.scope.
Oct 10 10:15:43 compute-1 podman[243422]: 2025-10-10 10:15:42.919289975 +0000 UTC m=+0.047807052 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 10 10:15:43 compute-1 systemd[1]: Started libcrun container.
Oct 10 10:15:43 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0987c4f280fd9dc9e42a49a851cd32a183cd87663d67540783ccc4b2c7668c46/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 10 10:15:43 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0987c4f280fd9dc9e42a49a851cd32a183cd87663d67540783ccc4b2c7668c46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 10 10:15:43 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0987c4f280fd9dc9e42a49a851cd32a183cd87663d67540783ccc4b2c7668c46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 10 10:15:43 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0987c4f280fd9dc9e42a49a851cd32a183cd87663d67540783ccc4b2c7668c46/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 10 10:15:43 compute-1 podman[243422]: 2025-10-10 10:15:43.066023247 +0000 UTC m=+0.194540324 container init 148401d5c360810a926f5b6b82c241e641015dcefd162defebcbdf87ab8c96ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_hypatia, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Oct 10 10:15:43 compute-1 podman[243422]: 2025-10-10 10:15:43.080242757 +0000 UTC m=+0.208759784 container start 148401d5c360810a926f5b6b82c241e641015dcefd162defebcbdf87ab8c96ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_hypatia, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:15:43 compute-1 podman[243422]: 2025-10-10 10:15:43.084768781 +0000 UTC m=+0.213285868 container attach 148401d5c360810a926f5b6b82c241e641015dcefd162defebcbdf87ab8c96ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 10 10:15:43 compute-1 nova_compute[235132]: 2025-10-10 10:15:43.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:44 compute-1 condescending_hypatia[243438]: [
Oct 10 10:15:44 compute-1 condescending_hypatia[243438]:     {
Oct 10 10:15:44 compute-1 condescending_hypatia[243438]:         "available": false,
Oct 10 10:15:44 compute-1 condescending_hypatia[243438]:         "being_replaced": false,
Oct 10 10:15:44 compute-1 condescending_hypatia[243438]:         "ceph_device_lvm": false,
Oct 10 10:15:44 compute-1 condescending_hypatia[243438]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 10 10:15:44 compute-1 condescending_hypatia[243438]:         "lsm_data": {},
Oct 10 10:15:44 compute-1 condescending_hypatia[243438]:         "lvs": [],
Oct 10 10:15:44 compute-1 condescending_hypatia[243438]:         "path": "/dev/sr0",
Oct 10 10:15:44 compute-1 condescending_hypatia[243438]:         "rejected_reasons": [
Oct 10 10:15:44 compute-1 condescending_hypatia[243438]:             "Insufficient space (<5GB)",
Oct 10 10:15:44 compute-1 condescending_hypatia[243438]:             "Has a FileSystem"
Oct 10 10:15:44 compute-1 condescending_hypatia[243438]:         ],
Oct 10 10:15:44 compute-1 condescending_hypatia[243438]:         "sys_api": {
Oct 10 10:15:44 compute-1 condescending_hypatia[243438]:             "actuators": null,
Oct 10 10:15:44 compute-1 condescending_hypatia[243438]:             "device_nodes": [
Oct 10 10:15:44 compute-1 condescending_hypatia[243438]:                 "sr0"
Oct 10 10:15:44 compute-1 condescending_hypatia[243438]:             ],
Oct 10 10:15:44 compute-1 condescending_hypatia[243438]:             "devname": "sr0",
Oct 10 10:15:44 compute-1 condescending_hypatia[243438]:             "human_readable_size": "482.00 KB",
Oct 10 10:15:44 compute-1 condescending_hypatia[243438]:             "id_bus": "ata",
Oct 10 10:15:44 compute-1 condescending_hypatia[243438]:             "model": "QEMU DVD-ROM",
Oct 10 10:15:44 compute-1 condescending_hypatia[243438]:             "nr_requests": "2",
Oct 10 10:15:44 compute-1 condescending_hypatia[243438]:             "parent": "/dev/sr0",
Oct 10 10:15:44 compute-1 condescending_hypatia[243438]:             "partitions": {},
Oct 10 10:15:44 compute-1 condescending_hypatia[243438]:             "path": "/dev/sr0",
Oct 10 10:15:44 compute-1 condescending_hypatia[243438]:             "removable": "1",
Oct 10 10:15:44 compute-1 condescending_hypatia[243438]:             "rev": "2.5+",
Oct 10 10:15:44 compute-1 condescending_hypatia[243438]:             "ro": "0",
Oct 10 10:15:44 compute-1 condescending_hypatia[243438]:             "rotational": "0",
Oct 10 10:15:44 compute-1 condescending_hypatia[243438]:             "sas_address": "",
Oct 10 10:15:44 compute-1 condescending_hypatia[243438]:             "sas_device_handle": "",
Oct 10 10:15:44 compute-1 condescending_hypatia[243438]:             "scheduler_mode": "mq-deadline",
Oct 10 10:15:44 compute-1 condescending_hypatia[243438]:             "sectors": 0,
Oct 10 10:15:44 compute-1 condescending_hypatia[243438]:             "sectorsize": "2048",
Oct 10 10:15:44 compute-1 condescending_hypatia[243438]:             "size": 493568.0,
Oct 10 10:15:44 compute-1 condescending_hypatia[243438]:             "support_discard": "2048",
Oct 10 10:15:44 compute-1 condescending_hypatia[243438]:             "type": "disk",
Oct 10 10:15:44 compute-1 condescending_hypatia[243438]:             "vendor": "QEMU"
Oct 10 10:15:44 compute-1 condescending_hypatia[243438]:         }
Oct 10 10:15:44 compute-1 condescending_hypatia[243438]:     }
Oct 10 10:15:44 compute-1 condescending_hypatia[243438]: ]
Oct 10 10:15:44 compute-1 systemd[1]: libpod-148401d5c360810a926f5b6b82c241e641015dcefd162defebcbdf87ab8c96ef.scope: Deactivated successfully.
Oct 10 10:15:44 compute-1 podman[243422]: 2025-10-10 10:15:44.052480589 +0000 UTC m=+1.180997636 container died 148401d5c360810a926f5b6b82c241e641015dcefd162defebcbdf87ab8c96ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_hypatia, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct 10 10:15:44 compute-1 systemd[1]: var-lib-containers-storage-overlay-0987c4f280fd9dc9e42a49a851cd32a183cd87663d67540783ccc4b2c7668c46-merged.mount: Deactivated successfully.
Oct 10 10:15:44 compute-1 podman[243422]: 2025-10-10 10:15:44.105154663 +0000 UTC m=+1.233671660 container remove 148401d5c360810a926f5b6b82c241e641015dcefd162defebcbdf87ab8c96ef (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Oct 10 10:15:44 compute-1 systemd[1]: libpod-conmon-148401d5c360810a926f5b6b82c241e641015dcefd162defebcbdf87ab8c96ef.scope: Deactivated successfully.
Oct 10 10:15:44 compute-1 podman[244757]: 2025-10-10 10:15:44.155411991 +0000 UTC m=+0.073826654 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 10 10:15:44 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:44 compute-1 sudo[243315]: pam_unix(sudo:session): session closed for user root
Oct 10 10:15:44 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:44 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:44.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:44 compute-1 ceph-mon[79167]: pgmap v893: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Oct 10 10:15:44 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:15:44 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:15:44 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:15:44 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:15:44 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:15:44 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:15:44 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:15:44 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:15:44 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:15:44 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:15:44 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:15:44 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:44 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:44 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:44.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:46 compute-1 nova_compute[235132]: 2025-10-10 10:15:46.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:46 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:46 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:46 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:46.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:46 compute-1 ceph-mon[79167]: pgmap v894: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 10 10:15:46 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:46 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:46 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:46.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:47 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:15:47 compute-1 nova_compute[235132]: 2025-10-10 10:15:47.269 2 INFO nova.compute.manager [None req-57b961d1-792e-4b4e-afa0-cfec45a9528e 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Get console output
Oct 10 10:15:47 compute-1 nova_compute[235132]: 2025-10-10 10:15:47.277 631 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 10 10:15:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:15:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:15:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:15:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:15:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:15:48 compute-1 nova_compute[235132]: 2025-10-10 10:15:48.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:48 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:48 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:48 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:48.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:48 compute-1 ceph-mon[79167]: pgmap v895: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 10 10:15:48 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:48 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:15:48 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:48.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:15:48 compute-1 sudo[244789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:15:49 compute-1 sudo[244789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:15:49 compute-1 sudo[244789]: pam_unix(sudo:session): session closed for user root
Oct 10 10:15:49 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:15:49 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:15:49 compute-1 ceph-mon[79167]: pgmap v896: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 307 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 10 10:15:50 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:50 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:50 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:50.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:50 compute-1 nova_compute[235132]: 2025-10-10 10:15:50.575 2 DEBUG oslo_concurrency.lockutils [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "interface-bd82d620-e0e5-4fb1-b8a5-973cefbcd107-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:15:50 compute-1 nova_compute[235132]: 2025-10-10 10:15:50.576 2 DEBUG oslo_concurrency.lockutils [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "interface-bd82d620-e0e5-4fb1-b8a5-973cefbcd107-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:15:50 compute-1 nova_compute[235132]: 2025-10-10 10:15:50.577 2 DEBUG nova.objects.instance [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'flavor' on Instance uuid bd82d620-e0e5-4fb1-b8a5-973cefbcd107 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 10:15:50 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:50 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:15:50 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:50.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:15:50 compute-1 nova_compute[235132]: 2025-10-10 10:15:50.994 2 DEBUG nova.objects.instance [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'pci_requests' on Instance uuid bd82d620-e0e5-4fb1-b8a5-973cefbcd107 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 10:15:51 compute-1 nova_compute[235132]: 2025-10-10 10:15:51.010 2 DEBUG nova.network.neutron [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 10 10:15:51 compute-1 nova_compute[235132]: 2025-10-10 10:15:51.185 2 DEBUG nova.policy [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7956778c03764aaf8906c9b435337976', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 10 10:15:51 compute-1 nova_compute[235132]: 2025-10-10 10:15:51.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:51 compute-1 nova_compute[235132]: 2025-10-10 10:15:51.820 2 DEBUG nova.network.neutron [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Successfully created port: a6efe4ab-2a26-46aa-8bf2-3dda99ea238c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 10 10:15:52 compute-1 nova_compute[235132]: 2025-10-10 10:15:52.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:15:52 compute-1 nova_compute[235132]: 2025-10-10 10:15:52.044 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 10:15:52 compute-1 ceph-mon[79167]: pgmap v897: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 10 10:15:52 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:52 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:15:52 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:52.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:15:52 compute-1 nova_compute[235132]: 2025-10-10 10:15:52.202 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:15:52 compute-1 nova_compute[235132]: 2025-10-10 10:15:52.227 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Triggering sync for uuid bd82d620-e0e5-4fb1-b8a5-973cefbcd107 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct 10 10:15:52 compute-1 nova_compute[235132]: 2025-10-10 10:15:52.227 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:15:52 compute-1 nova_compute[235132]: 2025-10-10 10:15:52.228 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:15:52 compute-1 nova_compute[235132]: 2025-10-10 10:15:52.288 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.060s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:15:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:15:52 compute-1 nova_compute[235132]: 2025-10-10 10:15:52.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:52 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:52.483 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:dc:6a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:2f:dd:4e:d8:41'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:15:52 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:52.484 141156 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 10 10:15:52 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:52.484 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ee0899c1-415d-4aa8-abe8-1240b4e8bf2c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:15:52 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:52 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:15:52 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:52.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:15:52 compute-1 nova_compute[235132]: 2025-10-10 10:15:52.695 2 DEBUG nova.network.neutron [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Successfully updated port: a6efe4ab-2a26-46aa-8bf2-3dda99ea238c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 10 10:15:52 compute-1 nova_compute[235132]: 2025-10-10 10:15:52.711 2 DEBUG oslo_concurrency.lockutils [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:15:52 compute-1 nova_compute[235132]: 2025-10-10 10:15:52.712 2 DEBUG oslo_concurrency.lockutils [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquired lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:15:52 compute-1 nova_compute[235132]: 2025-10-10 10:15:52.712 2 DEBUG nova.network.neutron [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 10 10:15:52 compute-1 nova_compute[235132]: 2025-10-10 10:15:52.808 2 DEBUG nova.compute.manager [req-cdcefc1f-05b8-4caf-83f6-012e262290ed req-96c80ee4-f87e-4534-9204-76b4b2576b0a 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Received event network-changed-a6efe4ab-2a26-46aa-8bf2-3dda99ea238c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:15:52 compute-1 nova_compute[235132]: 2025-10-10 10:15:52.809 2 DEBUG nova.compute.manager [req-cdcefc1f-05b8-4caf-83f6-012e262290ed req-96c80ee4-f87e-4534-9204-76b4b2576b0a 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Refreshing instance network info cache due to event network-changed-a6efe4ab-2a26-46aa-8bf2-3dda99ea238c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 10 10:15:52 compute-1 nova_compute[235132]: 2025-10-10 10:15:52.809 2 DEBUG oslo_concurrency.lockutils [req-cdcefc1f-05b8-4caf-83f6-012e262290ed req-96c80ee4-f87e-4534-9204-76b4b2576b0a 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:15:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:15:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:15:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:15:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:15:53 compute-1 nova_compute[235132]: 2025-10-10 10:15:53.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:15:53 compute-1 nova_compute[235132]: 2025-10-10 10:15:53.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:15:53 compute-1 nova_compute[235132]: 2025-10-10 10:15:53.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:15:53 compute-1 nova_compute[235132]: 2025-10-10 10:15:53.045 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:15:53 compute-1 nova_compute[235132]: 2025-10-10 10:15:53.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:54 compute-1 ceph-mon[79167]: pgmap v898: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 307 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 10 10:15:54 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:54 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:15:54 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:54.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:15:54 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:54 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:54 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:54.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:55 compute-1 nova_compute[235132]: 2025-10-10 10:15:55.060 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:15:55 compute-1 nova_compute[235132]: 2025-10-10 10:15:55.061 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:15:55 compute-1 nova_compute[235132]: 2025-10-10 10:15:55.061 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 10 10:15:55 compute-1 nova_compute[235132]: 2025-10-10 10:15:55.079 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 10 10:15:55 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2472353382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:15:55 compute-1 nova_compute[235132]: 2025-10-10 10:15:55.546 2 DEBUG nova.network.neutron [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Updating instance_info_cache with network_info: [{"id": "562e8418-d47e-4fd1-8a23-094e0ce40097", "address": "fa:16:3e:73:fc:1f", "network": {"id": "ebfb122d-a6ca-4257-952a-e1a888448e1c", "bridge": "br-int", "label": "tempest-network-smoke--1217728793", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap562e8418-d4", "ovs_interfaceid": "562e8418-d47e-4fd1-8a23-094e0ce40097", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "address": "fa:16:3e:ae:b1:45", "network": {"id": "87f6394d-4290-4eca-8ba0-18711f3ad6e0", "bridge": "br-int", "label": "tempest-network-smoke--1629699660", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6efe4ab-2a", "ovs_interfaceid": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:15:55 compute-1 nova_compute[235132]: 2025-10-10 10:15:55.573 2 DEBUG oslo_concurrency.lockutils [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Releasing lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:15:55 compute-1 nova_compute[235132]: 2025-10-10 10:15:55.574 2 DEBUG oslo_concurrency.lockutils [req-cdcefc1f-05b8-4caf-83f6-012e262290ed req-96c80ee4-f87e-4534-9204-76b4b2576b0a 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquired lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:15:55 compute-1 nova_compute[235132]: 2025-10-10 10:15:55.574 2 DEBUG nova.network.neutron [req-cdcefc1f-05b8-4caf-83f6-012e262290ed req-96c80ee4-f87e-4534-9204-76b4b2576b0a 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Refreshing network info cache for port a6efe4ab-2a26-46aa-8bf2-3dda99ea238c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 10 10:15:55 compute-1 nova_compute[235132]: 2025-10-10 10:15:55.578 2 DEBUG nova.virt.libvirt.vif [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-10T10:15:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-217348562',display_name='tempest-TestNetworkBasicOps-server-217348562',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-217348562',id=6,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHLzdAHDdUURWXD0NwbnzkKciFKEQ2omFqKVpiZUQ/jQkwx0IlaJ48FUTUghTozEFkbgWKl3XHIfnAKs6ai2Am8DZErVGD6iO1tzsuGiO5n1KsYJdS5ZP3lMvRFTeABsRg==',key_name='tempest-TestNetworkBasicOps-1625432950',keypairs=<?>,launch_index=0,launched_at=2025-10-10T10:15:28Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-qt8amg0k',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-10T10:15:28Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=bd82d620-e0e5-4fb1-b8a5-973cefbcd107,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "address": "fa:16:3e:ae:b1:45", "network": {"id": "87f6394d-4290-4eca-8ba0-18711f3ad6e0", "bridge": "br-int", "label": "tempest-network-smoke--1629699660", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6efe4ab-2a", "ovs_interfaceid": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 10 10:15:55 compute-1 nova_compute[235132]: 2025-10-10 10:15:55.578 2 DEBUG nova.network.os_vif_util [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "address": "fa:16:3e:ae:b1:45", "network": {"id": "87f6394d-4290-4eca-8ba0-18711f3ad6e0", "bridge": "br-int", "label": "tempest-network-smoke--1629699660", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6efe4ab-2a", "ovs_interfaceid": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 10 10:15:55 compute-1 nova_compute[235132]: 2025-10-10 10:15:55.579 2 DEBUG nova.network.os_vif_util [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:b1:45,bridge_name='br-int',has_traffic_filtering=True,id=a6efe4ab-2a26-46aa-8bf2-3dda99ea238c,network=Network(87f6394d-4290-4eca-8ba0-18711f3ad6e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6efe4ab-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 10 10:15:55 compute-1 nova_compute[235132]: 2025-10-10 10:15:55.579 2 DEBUG os_vif [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:b1:45,bridge_name='br-int',has_traffic_filtering=True,id=a6efe4ab-2a26-46aa-8bf2-3dda99ea238c,network=Network(87f6394d-4290-4eca-8ba0-18711f3ad6e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6efe4ab-2a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 10 10:15:55 compute-1 nova_compute[235132]: 2025-10-10 10:15:55.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:55 compute-1 nova_compute[235132]: 2025-10-10 10:15:55.580 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:15:55 compute-1 nova_compute[235132]: 2025-10-10 10:15:55.581 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 10 10:15:55 compute-1 nova_compute[235132]: 2025-10-10 10:15:55.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:55 compute-1 nova_compute[235132]: 2025-10-10 10:15:55.585 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa6efe4ab-2a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:15:55 compute-1 nova_compute[235132]: 2025-10-10 10:15:55.585 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa6efe4ab-2a, col_values=(('external_ids', {'iface-id': 'a6efe4ab-2a26-46aa-8bf2-3dda99ea238c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ae:b1:45', 'vm-uuid': 'bd82d620-e0e5-4fb1-b8a5-973cefbcd107'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:15:55 compute-1 nova_compute[235132]: 2025-10-10 10:15:55.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:55 compute-1 NetworkManager[44982]: <info>  [1760091355.5898] manager: (tapa6efe4ab-2a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Oct 10 10:15:55 compute-1 nova_compute[235132]: 2025-10-10 10:15:55.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 10 10:15:55 compute-1 nova_compute[235132]: 2025-10-10 10:15:55.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:55 compute-1 nova_compute[235132]: 2025-10-10 10:15:55.600 2 INFO os_vif [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ae:b1:45,bridge_name='br-int',has_traffic_filtering=True,id=a6efe4ab-2a26-46aa-8bf2-3dda99ea238c,network=Network(87f6394d-4290-4eca-8ba0-18711f3ad6e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6efe4ab-2a')
Oct 10 10:15:55 compute-1 nova_compute[235132]: 2025-10-10 10:15:55.601 2 DEBUG nova.virt.libvirt.vif [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-10T10:15:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-217348562',display_name='tempest-TestNetworkBasicOps-server-217348562',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-217348562',id=6,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHLzdAHDdUURWXD0NwbnzkKciFKEQ2omFqKVpiZUQ/jQkwx0IlaJ48FUTUghTozEFkbgWKl3XHIfnAKs6ai2Am8DZErVGD6iO1tzsuGiO5n1KsYJdS5ZP3lMvRFTeABsRg==',key_name='tempest-TestNetworkBasicOps-1625432950',keypairs=<?>,launch_index=0,launched_at=2025-10-10T10:15:28Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-qt8amg0k',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-10T10:15:28Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=bd82d620-e0e5-4fb1-b8a5-973cefbcd107,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "address": "fa:16:3e:ae:b1:45", "network": {"id": "87f6394d-4290-4eca-8ba0-18711f3ad6e0", "bridge": "br-int", "label": "tempest-network-smoke--1629699660", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6efe4ab-2a", "ovs_interfaceid": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 10 10:15:55 compute-1 nova_compute[235132]: 2025-10-10 10:15:55.601 2 DEBUG nova.network.os_vif_util [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "address": "fa:16:3e:ae:b1:45", "network": {"id": "87f6394d-4290-4eca-8ba0-18711f3ad6e0", "bridge": "br-int", "label": "tempest-network-smoke--1629699660", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6efe4ab-2a", "ovs_interfaceid": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 10 10:15:55 compute-1 nova_compute[235132]: 2025-10-10 10:15:55.602 2 DEBUG nova.network.os_vif_util [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ae:b1:45,bridge_name='br-int',has_traffic_filtering=True,id=a6efe4ab-2a26-46aa-8bf2-3dda99ea238c,network=Network(87f6394d-4290-4eca-8ba0-18711f3ad6e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6efe4ab-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 10 10:15:55 compute-1 nova_compute[235132]: 2025-10-10 10:15:55.604 2 DEBUG nova.virt.libvirt.guest [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] attach device xml: <interface type="ethernet">
Oct 10 10:15:55 compute-1 nova_compute[235132]:   <mac address="fa:16:3e:ae:b1:45"/>
Oct 10 10:15:55 compute-1 nova_compute[235132]:   <model type="virtio"/>
Oct 10 10:15:55 compute-1 nova_compute[235132]:   <driver name="vhost" rx_queue_size="512"/>
Oct 10 10:15:55 compute-1 nova_compute[235132]:   <mtu size="1442"/>
Oct 10 10:15:55 compute-1 nova_compute[235132]:   <target dev="tapa6efe4ab-2a"/>
Oct 10 10:15:55 compute-1 nova_compute[235132]: </interface>
Oct 10 10:15:55 compute-1 nova_compute[235132]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 10 10:15:55 compute-1 kernel: tapa6efe4ab-2a: entered promiscuous mode
Oct 10 10:15:55 compute-1 NetworkManager[44982]: <info>  [1760091355.6254] manager: (tapa6efe4ab-2a): new Tun device (/org/freedesktop/NetworkManager/Devices/56)
Oct 10 10:15:55 compute-1 ovn_controller[131749]: 2025-10-10T10:15:55Z|00074|binding|INFO|Claiming lport a6efe4ab-2a26-46aa-8bf2-3dda99ea238c for this chassis.
Oct 10 10:15:55 compute-1 ovn_controller[131749]: 2025-10-10T10:15:55Z|00075|binding|INFO|a6efe4ab-2a26-46aa-8bf2-3dda99ea238c: Claiming fa:16:3e:ae:b1:45 10.100.0.29
Oct 10 10:15:55 compute-1 nova_compute[235132]: 2025-10-10 10:15:55.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:55 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.641 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:b1:45 10.100.0.29'], port_security=['fa:16:3e:ae:b1:45 10.100.0.29'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.29/28', 'neutron:device_id': 'bd82d620-e0e5-4fb1-b8a5-973cefbcd107', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-87f6394d-4290-4eca-8ba0-18711f3ad6e0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '79abf760-0fb0-448c-b5c8-75027ac31ae3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=daddf600-eff8-433f-97e5-f9a5bf5367ce, chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>], logical_port=a6efe4ab-2a26-46aa-8bf2-3dda99ea238c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:15:55 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.642 141156 INFO neutron.agent.ovn.metadata.agent [-] Port a6efe4ab-2a26-46aa-8bf2-3dda99ea238c in datapath 87f6394d-4290-4eca-8ba0-18711f3ad6e0 bound to our chassis
Oct 10 10:15:55 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.643 141156 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 87f6394d-4290-4eca-8ba0-18711f3ad6e0
Oct 10 10:15:55 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.654 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[5c552a08-c371-4cf6-996c-0da1878d09e6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:15:55 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.656 141156 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap87f6394d-41 in ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 10 10:15:55 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.658 238898 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap87f6394d-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 10 10:15:55 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.658 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[587f998f-b319-4892-8622-75f80a4acc60]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:15:55 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.659 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[751644b2-9258-4d14-89a6-d93b07dcd257]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:15:55 compute-1 systemd-udevd[244825]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 10:15:55 compute-1 nova_compute[235132]: 2025-10-10 10:15:55.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:55 compute-1 ovn_controller[131749]: 2025-10-10T10:15:55Z|00076|binding|INFO|Setting lport a6efe4ab-2a26-46aa-8bf2-3dda99ea238c ovn-installed in OVS
Oct 10 10:15:55 compute-1 ovn_controller[131749]: 2025-10-10T10:15:55Z|00077|binding|INFO|Setting lport a6efe4ab-2a26-46aa-8bf2-3dda99ea238c up in Southbound
Oct 10 10:15:55 compute-1 nova_compute[235132]: 2025-10-10 10:15:55.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:55 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.676 141275 DEBUG oslo.privsep.daemon [-] privsep: reply[42710e81-3978-4576-b7d5-91c505852f97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:15:55 compute-1 NetworkManager[44982]: <info>  [1760091355.6845] device (tapa6efe4ab-2a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 10:15:55 compute-1 NetworkManager[44982]: <info>  [1760091355.6857] device (tapa6efe4ab-2a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 10 10:15:55 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.701 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[7a52e845-7b40-49da-a9c0-b18402cd55e1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:15:55 compute-1 nova_compute[235132]: 2025-10-10 10:15:55.732 2 DEBUG nova.virt.libvirt.driver [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 10 10:15:55 compute-1 nova_compute[235132]: 2025-10-10 10:15:55.733 2 DEBUG nova.virt.libvirt.driver [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 10 10:15:55 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.732 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[a8bbb61d-36af-4075-9ba8-6680ae253e39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:15:55 compute-1 nova_compute[235132]: 2025-10-10 10:15:55.733 2 DEBUG nova.virt.libvirt.driver [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No VIF found with MAC fa:16:3e:73:fc:1f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 10 10:15:55 compute-1 nova_compute[235132]: 2025-10-10 10:15:55.734 2 DEBUG nova.virt.libvirt.driver [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No VIF found with MAC fa:16:3e:ae:b1:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 10 10:15:55 compute-1 NetworkManager[44982]: <info>  [1760091355.7381] manager: (tap87f6394d-40): new Veth device (/org/freedesktop/NetworkManager/Devices/57)
Oct 10 10:15:55 compute-1 systemd-udevd[244828]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 10:15:55 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.737 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[c0cad625-673e-48ac-a1e7-d4ab66446b30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:15:55 compute-1 nova_compute[235132]: 2025-10-10 10:15:55.756 2 DEBUG nova.virt.libvirt.guest [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 10:15:55 compute-1 nova_compute[235132]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 10:15:55 compute-1 nova_compute[235132]:   <nova:name>tempest-TestNetworkBasicOps-server-217348562</nova:name>
Oct 10 10:15:55 compute-1 nova_compute[235132]:   <nova:creationTime>2025-10-10 10:15:55</nova:creationTime>
Oct 10 10:15:55 compute-1 nova_compute[235132]:   <nova:flavor name="m1.nano">
Oct 10 10:15:55 compute-1 nova_compute[235132]:     <nova:memory>128</nova:memory>
Oct 10 10:15:55 compute-1 nova_compute[235132]:     <nova:disk>1</nova:disk>
Oct 10 10:15:55 compute-1 nova_compute[235132]:     <nova:swap>0</nova:swap>
Oct 10 10:15:55 compute-1 nova_compute[235132]:     <nova:ephemeral>0</nova:ephemeral>
Oct 10 10:15:55 compute-1 nova_compute[235132]:     <nova:vcpus>1</nova:vcpus>
Oct 10 10:15:55 compute-1 nova_compute[235132]:   </nova:flavor>
Oct 10 10:15:55 compute-1 nova_compute[235132]:   <nova:owner>
Oct 10 10:15:55 compute-1 nova_compute[235132]:     <nova:user uuid="7956778c03764aaf8906c9b435337976">tempest-TestNetworkBasicOps-188749107-project-member</nova:user>
Oct 10 10:15:55 compute-1 nova_compute[235132]:     <nova:project uuid="d5e531d4b440422d946eaf6fd4e166f7">tempest-TestNetworkBasicOps-188749107</nova:project>
Oct 10 10:15:55 compute-1 nova_compute[235132]:   </nova:owner>
Oct 10 10:15:55 compute-1 nova_compute[235132]:   <nova:root type="image" uuid="5ae78700-970d-45b4-a57d-978a054c7519"/>
Oct 10 10:15:55 compute-1 nova_compute[235132]:   <nova:ports>
Oct 10 10:15:55 compute-1 nova_compute[235132]:     <nova:port uuid="562e8418-d47e-4fd1-8a23-094e0ce40097">
Oct 10 10:15:55 compute-1 nova_compute[235132]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 10 10:15:55 compute-1 nova_compute[235132]:     </nova:port>
Oct 10 10:15:55 compute-1 nova_compute[235132]:     <nova:port uuid="a6efe4ab-2a26-46aa-8bf2-3dda99ea238c">
Oct 10 10:15:55 compute-1 nova_compute[235132]:       <nova:ip type="fixed" address="10.100.0.29" ipVersion="4"/>
Oct 10 10:15:55 compute-1 nova_compute[235132]:     </nova:port>
Oct 10 10:15:55 compute-1 nova_compute[235132]:   </nova:ports>
Oct 10 10:15:55 compute-1 nova_compute[235132]: </nova:instance>
Oct 10 10:15:55 compute-1 nova_compute[235132]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 10 10:15:55 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.769 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[5090c819-26a2-43d5-809f-cf83364f21f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:15:55 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.772 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[7d939fde-d06c-4a26-ac1a-9b7d9aff574b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:15:55 compute-1 nova_compute[235132]: 2025-10-10 10:15:55.793 2 DEBUG oslo_concurrency.lockutils [None req-566fb8e4-ca4b-4ea3-99b6-e0905dfcebfa 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "interface-bd82d620-e0e5-4fb1-b8a5-973cefbcd107-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.217s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:15:55 compute-1 NetworkManager[44982]: <info>  [1760091355.7947] device (tap87f6394d-40): carrier: link connected
Oct 10 10:15:55 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.798 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[0ea55b6e-0c89-4741-a33f-bf78a22a42cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:15:55 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.818 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[34bfea6d-600b-400b-b6eb-7976256195d9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap87f6394d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:68:a4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 426950, 'reachable_time': 43037, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 244851, 'error': None, 'target': 'ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:15:55 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.832 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[01df94ba-0a71-41d5-8f18-7eb886485059]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea4:68a4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 426950, 'tstamp': 426950}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244852, 'error': None, 'target': 'ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:15:55 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.849 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[e82fc97d-c7cc-44b5-a8e0-64a9b78cb3e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap87f6394d-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:68:a4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 426950, 'reachable_time': 43037, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 244853, 'error': None, 'target': 'ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:15:55 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.883 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[ab5bd2b7-60a4-4719-a3bf-1031c2162937]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:15:55 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.978 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[9bd7c497-ba00-41f8-8164-9a333b2de116]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:15:55 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.980 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87f6394d-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:15:55 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.980 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 10 10:15:55 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.981 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap87f6394d-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:15:55 compute-1 kernel: tap87f6394d-40: entered promiscuous mode
Oct 10 10:15:55 compute-1 nova_compute[235132]: 2025-10-10 10:15:55.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:55 compute-1 NetworkManager[44982]: <info>  [1760091355.9845] manager: (tap87f6394d-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Oct 10 10:15:55 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.992 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap87f6394d-40, col_values=(('external_ids', {'iface-id': '25f0e25b-e08d-4c72-b1cf-e3d546e34451'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:15:55 compute-1 nova_compute[235132]: 2025-10-10 10:15:55.992 2 DEBUG nova.compute.manager [req-e648ded3-fc9b-4637-af0f-326b1961a29b req-0be89d06-e743-4839-b2d5-a1d6ccb49c31 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Received event network-vif-plugged-a6efe4ab-2a26-46aa-8bf2-3dda99ea238c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:15:55 compute-1 nova_compute[235132]: 2025-10-10 10:15:55.992 2 DEBUG oslo_concurrency.lockutils [req-e648ded3-fc9b-4637-af0f-326b1961a29b req-0be89d06-e743-4839-b2d5-a1d6ccb49c31 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:15:55 compute-1 nova_compute[235132]: 2025-10-10 10:15:55.993 2 DEBUG oslo_concurrency.lockutils [req-e648ded3-fc9b-4637-af0f-326b1961a29b req-0be89d06-e743-4839-b2d5-a1d6ccb49c31 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:15:55 compute-1 nova_compute[235132]: 2025-10-10 10:15:55.993 2 DEBUG oslo_concurrency.lockutils [req-e648ded3-fc9b-4637-af0f-326b1961a29b req-0be89d06-e743-4839-b2d5-a1d6ccb49c31 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:15:55 compute-1 nova_compute[235132]: 2025-10-10 10:15:55.994 2 DEBUG nova.compute.manager [req-e648ded3-fc9b-4637-af0f-326b1961a29b req-0be89d06-e743-4839-b2d5-a1d6ccb49c31 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] No waiting events found dispatching network-vif-plugged-a6efe4ab-2a26-46aa-8bf2-3dda99ea238c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 10:15:55 compute-1 ovn_controller[131749]: 2025-10-10T10:15:55Z|00078|binding|INFO|Releasing lport 25f0e25b-e08d-4c72-b1cf-e3d546e34451 from this chassis (sb_readonly=0)
Oct 10 10:15:55 compute-1 nova_compute[235132]: 2025-10-10 10:15:55.994 2 WARNING nova.compute.manager [req-e648ded3-fc9b-4637-af0f-326b1961a29b req-0be89d06-e743-4839-b2d5-a1d6ccb49c31 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Received unexpected event network-vif-plugged-a6efe4ab-2a26-46aa-8bf2-3dda99ea238c for instance with vm_state active and task_state None.
Oct 10 10:15:55 compute-1 nova_compute[235132]: 2025-10-10 10:15:55.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:55 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:55.998 141156 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/87f6394d-4290-4eca-8ba0-18711f3ad6e0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/87f6394d-4290-4eca-8ba0-18711f3ad6e0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 10 10:15:56 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:56.000 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[8e93a820-c11c-4f84-b70e-2113bc9b1bc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:15:56 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:56.001 141156 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 10 10:15:56 compute-1 ovn_metadata_agent[141151]: global
Oct 10 10:15:56 compute-1 ovn_metadata_agent[141151]:     log         /dev/log local0 debug
Oct 10 10:15:56 compute-1 ovn_metadata_agent[141151]:     log-tag     haproxy-metadata-proxy-87f6394d-4290-4eca-8ba0-18711f3ad6e0
Oct 10 10:15:56 compute-1 ovn_metadata_agent[141151]:     user        root
Oct 10 10:15:56 compute-1 ovn_metadata_agent[141151]:     group       root
Oct 10 10:15:56 compute-1 ovn_metadata_agent[141151]:     maxconn     1024
Oct 10 10:15:56 compute-1 ovn_metadata_agent[141151]:     pidfile     /var/lib/neutron/external/pids/87f6394d-4290-4eca-8ba0-18711f3ad6e0.pid.haproxy
Oct 10 10:15:56 compute-1 ovn_metadata_agent[141151]:     daemon
Oct 10 10:15:56 compute-1 ovn_metadata_agent[141151]: 
Oct 10 10:15:56 compute-1 ovn_metadata_agent[141151]: defaults
Oct 10 10:15:56 compute-1 ovn_metadata_agent[141151]:     log global
Oct 10 10:15:56 compute-1 ovn_metadata_agent[141151]:     mode http
Oct 10 10:15:56 compute-1 ovn_metadata_agent[141151]:     option httplog
Oct 10 10:15:56 compute-1 ovn_metadata_agent[141151]:     option dontlognull
Oct 10 10:15:56 compute-1 ovn_metadata_agent[141151]:     option http-server-close
Oct 10 10:15:56 compute-1 ovn_metadata_agent[141151]:     option forwardfor
Oct 10 10:15:56 compute-1 ovn_metadata_agent[141151]:     retries                 3
Oct 10 10:15:56 compute-1 ovn_metadata_agent[141151]:     timeout http-request    30s
Oct 10 10:15:56 compute-1 ovn_metadata_agent[141151]:     timeout connect         30s
Oct 10 10:15:56 compute-1 ovn_metadata_agent[141151]:     timeout client          32s
Oct 10 10:15:56 compute-1 ovn_metadata_agent[141151]:     timeout server          32s
Oct 10 10:15:56 compute-1 ovn_metadata_agent[141151]:     timeout http-keep-alive 30s
Oct 10 10:15:56 compute-1 ovn_metadata_agent[141151]: 
Oct 10 10:15:56 compute-1 ovn_metadata_agent[141151]: 
Oct 10 10:15:56 compute-1 ovn_metadata_agent[141151]: listen listener
Oct 10 10:15:56 compute-1 ovn_metadata_agent[141151]:     bind 169.254.169.254:80
Oct 10 10:15:56 compute-1 ovn_metadata_agent[141151]:     server metadata /var/lib/neutron/metadata_proxy
Oct 10 10:15:56 compute-1 ovn_metadata_agent[141151]:     http-request add-header X-OVN-Network-ID 87f6394d-4290-4eca-8ba0-18711f3ad6e0
Oct 10 10:15:56 compute-1 ovn_metadata_agent[141151]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 10 10:15:56 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:15:56.004 141156 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0', 'env', 'PROCESS_TAG=haproxy-87f6394d-4290-4eca-8ba0-18711f3ad6e0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/87f6394d-4290-4eca-8ba0-18711f3ad6e0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 10 10:15:56 compute-1 nova_compute[235132]: 2025-10-10 10:15:56.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:56 compute-1 nova_compute[235132]: 2025-10-10 10:15:56.057 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:15:56 compute-1 nova_compute[235132]: 2025-10-10 10:15:56.091 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:15:56 compute-1 nova_compute[235132]: 2025-10-10 10:15:56.091 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 10:15:56 compute-1 nova_compute[235132]: 2025-10-10 10:15:56.091 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 10:15:56 compute-1 ceph-mon[79167]: pgmap v899: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 12 KiB/s wr, 1 op/s
Oct 10 10:15:56 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/375310620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:15:56 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3100558740' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:15:56 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:56 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:56 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:56.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:56 compute-1 podman[244885]: 2025-10-10 10:15:56.42917514 +0000 UTC m=+0.069186227 container create 24470c868dec800b6afc9c007cf2c5daf050700fd0047b50978eb69f4e5cb050 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0)
Oct 10 10:15:56 compute-1 systemd[1]: Started libpod-conmon-24470c868dec800b6afc9c007cf2c5daf050700fd0047b50978eb69f4e5cb050.scope.
Oct 10 10:15:56 compute-1 podman[244885]: 2025-10-10 10:15:56.393500023 +0000 UTC m=+0.033511160 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 10 10:15:56 compute-1 systemd[1]: Started libcrun container.
Oct 10 10:15:56 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0e01352dbf0d1bebdf46980d76e9074c40ab7243819392e4c65167a834fa151/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 10 10:15:56 compute-1 podman[244885]: 2025-10-10 10:15:56.543556266 +0000 UTC m=+0.183567323 container init 24470c868dec800b6afc9c007cf2c5daf050700fd0047b50978eb69f4e5cb050 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:15:56 compute-1 podman[244885]: 2025-10-10 10:15:56.553253362 +0000 UTC m=+0.193264419 container start 24470c868dec800b6afc9c007cf2c5daf050700fd0047b50978eb69f4e5cb050 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 10 10:15:56 compute-1 neutron-haproxy-ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0[244900]: [NOTICE]   (244904) : New worker (244906) forked
Oct 10 10:15:56 compute-1 neutron-haproxy-ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0[244900]: [NOTICE]   (244904) : Loading success.
Oct 10 10:15:56 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:56 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:56 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:56.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:57 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/422541512' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:15:57 compute-1 nova_compute[235132]: 2025-10-10 10:15:57.148 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:15:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:15:57 compute-1 nova_compute[235132]: 2025-10-10 10:15:57.676 2 DEBUG nova.network.neutron [req-cdcefc1f-05b8-4caf-83f6-012e262290ed req-96c80ee4-f87e-4534-9204-76b4b2576b0a 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Updated VIF entry in instance network info cache for port a6efe4ab-2a26-46aa-8bf2-3dda99ea238c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 10 10:15:57 compute-1 nova_compute[235132]: 2025-10-10 10:15:57.677 2 DEBUG nova.network.neutron [req-cdcefc1f-05b8-4caf-83f6-012e262290ed req-96c80ee4-f87e-4534-9204-76b4b2576b0a 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Updating instance_info_cache with network_info: [{"id": "562e8418-d47e-4fd1-8a23-094e0ce40097", "address": "fa:16:3e:73:fc:1f", "network": {"id": "ebfb122d-a6ca-4257-952a-e1a888448e1c", "bridge": "br-int", "label": "tempest-network-smoke--1217728793", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap562e8418-d4", "ovs_interfaceid": "562e8418-d47e-4fd1-8a23-094e0ce40097", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "address": "fa:16:3e:ae:b1:45", "network": {"id": "87f6394d-4290-4eca-8ba0-18711f3ad6e0", "bridge": "br-int", "label": "tempest-network-smoke--1629699660", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6efe4ab-2a", "ovs_interfaceid": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:15:57 compute-1 nova_compute[235132]: 2025-10-10 10:15:57.701 2 DEBUG oslo_concurrency.lockutils [req-cdcefc1f-05b8-4caf-83f6-012e262290ed req-96c80ee4-f87e-4534-9204-76b4b2576b0a 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Releasing lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:15:57 compute-1 nova_compute[235132]: 2025-10-10 10:15:57.702 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquired lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:15:57 compute-1 nova_compute[235132]: 2025-10-10 10:15:57.702 2 DEBUG nova.network.neutron [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 10 10:15:57 compute-1 nova_compute[235132]: 2025-10-10 10:15:57.703 2 DEBUG nova.objects.instance [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lazy-loading 'info_cache' on Instance uuid bd82d620-e0e5-4fb1-b8a5-973cefbcd107 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 10:15:57 compute-1 podman[244917]: 2025-10-10 10:15:57.98897811 +0000 UTC m=+0.092161338 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true)
Oct 10 10:15:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:15:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:15:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:15:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:15:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:15:58 compute-1 podman[244916]: 2025-10-10 10:15:58.005103362 +0000 UTC m=+0.103719204 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid)
Oct 10 10:15:58 compute-1 podman[244918]: 2025-10-10 10:15:58.039921406 +0000 UTC m=+0.128513874 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 10 10:15:58 compute-1 ovn_controller[131749]: 2025-10-10T10:15:58Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ae:b1:45 10.100.0.29
Oct 10 10:15:58 compute-1 ovn_controller[131749]: 2025-10-10T10:15:58Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ae:b1:45 10.100.0.29
Oct 10 10:15:58 compute-1 nova_compute[235132]: 2025-10-10 10:15:58.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:15:58 compute-1 nova_compute[235132]: 2025-10-10 10:15:58.128 2 DEBUG nova.compute.manager [req-ed2a6528-e0e6-4728-b03d-6a7435b0efe6 req-aac0f80c-36a2-4d45-90e2-e063bd7151df 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Received event network-vif-plugged-a6efe4ab-2a26-46aa-8bf2-3dda99ea238c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:15:58 compute-1 nova_compute[235132]: 2025-10-10 10:15:58.129 2 DEBUG oslo_concurrency.lockutils [req-ed2a6528-e0e6-4728-b03d-6a7435b0efe6 req-aac0f80c-36a2-4d45-90e2-e063bd7151df 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:15:58 compute-1 nova_compute[235132]: 2025-10-10 10:15:58.129 2 DEBUG oslo_concurrency.lockutils [req-ed2a6528-e0e6-4728-b03d-6a7435b0efe6 req-aac0f80c-36a2-4d45-90e2-e063bd7151df 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:15:58 compute-1 nova_compute[235132]: 2025-10-10 10:15:58.130 2 DEBUG oslo_concurrency.lockutils [req-ed2a6528-e0e6-4728-b03d-6a7435b0efe6 req-aac0f80c-36a2-4d45-90e2-e063bd7151df 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:15:58 compute-1 nova_compute[235132]: 2025-10-10 10:15:58.130 2 DEBUG nova.compute.manager [req-ed2a6528-e0e6-4728-b03d-6a7435b0efe6 req-aac0f80c-36a2-4d45-90e2-e063bd7151df 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] No waiting events found dispatching network-vif-plugged-a6efe4ab-2a26-46aa-8bf2-3dda99ea238c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 10:15:58 compute-1 nova_compute[235132]: 2025-10-10 10:15:58.131 2 WARNING nova.compute.manager [req-ed2a6528-e0e6-4728-b03d-6a7435b0efe6 req-aac0f80c-36a2-4d45-90e2-e063bd7151df 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Received unexpected event network-vif-plugged-a6efe4ab-2a26-46aa-8bf2-3dda99ea238c for instance with vm_state active and task_state None.
Oct 10 10:15:58 compute-1 ceph-mon[79167]: pgmap v900: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 12 KiB/s wr, 1 op/s
Oct 10 10:15:58 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:58 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:15:58 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:15:58.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:15:58 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:15:58 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:15:58 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:15:58.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:15:59 compute-1 sudo[244978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:15:59 compute-1 sudo[244978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:15:59 compute-1 sudo[244978]: pam_unix(sudo:session): session closed for user root
Oct 10 10:16:00 compute-1 ceph-mon[79167]: pgmap v901: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 14 KiB/s wr, 2 op/s
Oct 10 10:16:00 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:00 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:00 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:00.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:00 compute-1 nova_compute[235132]: 2025-10-10 10:16:00.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:00 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:00 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:16:00 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:00.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:16:01 compute-1 nova_compute[235132]: 2025-10-10 10:16:01.097 2 DEBUG nova.network.neutron [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Updating instance_info_cache with network_info: [{"id": "562e8418-d47e-4fd1-8a23-094e0ce40097", "address": "fa:16:3e:73:fc:1f", "network": {"id": "ebfb122d-a6ca-4257-952a-e1a888448e1c", "bridge": "br-int", "label": "tempest-network-smoke--1217728793", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap562e8418-d4", "ovs_interfaceid": "562e8418-d47e-4fd1-8a23-094e0ce40097", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "address": "fa:16:3e:ae:b1:45", "network": {"id": "87f6394d-4290-4eca-8ba0-18711f3ad6e0", "bridge": "br-int", "label": "tempest-network-smoke--1629699660", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6efe4ab-2a", "ovs_interfaceid": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:16:01 compute-1 nova_compute[235132]: 2025-10-10 10:16:01.138 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Releasing lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:16:01 compute-1 nova_compute[235132]: 2025-10-10 10:16:01.139 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 10 10:16:01 compute-1 nova_compute[235132]: 2025-10-10 10:16:01.140 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:16:01 compute-1 nova_compute[235132]: 2025-10-10 10:16:01.140 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:16:01 compute-1 nova_compute[235132]: 2025-10-10 10:16:01.141 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:16:01 compute-1 nova_compute[235132]: 2025-10-10 10:16:01.165 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:16:01 compute-1 nova_compute[235132]: 2025-10-10 10:16:01.165 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:16:01 compute-1 nova_compute[235132]: 2025-10-10 10:16:01.166 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:16:01 compute-1 nova_compute[235132]: 2025-10-10 10:16:01.166 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:16:01 compute-1 nova_compute[235132]: 2025-10-10 10:16:01.167 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:16:01 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:16:01 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/924314656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:16:01 compute-1 nova_compute[235132]: 2025-10-10 10:16:01.664 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:16:01 compute-1 nova_compute[235132]: 2025-10-10 10:16:01.752 2 DEBUG nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 10 10:16:01 compute-1 nova_compute[235132]: 2025-10-10 10:16:01.753 2 DEBUG nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 10 10:16:01 compute-1 nova_compute[235132]: 2025-10-10 10:16:01.986 2 WARNING nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:16:01 compute-1 nova_compute[235132]: 2025-10-10 10:16:01.988 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4710MB free_disk=59.942726135253906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:16:01 compute-1 nova_compute[235132]: 2025-10-10 10:16:01.988 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:16:01 compute-1 nova_compute[235132]: 2025-10-10 10:16:01.989 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:16:02 compute-1 ceph-mon[79167]: pgmap v902: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 2.0 KiB/s wr, 1 op/s
Oct 10 10:16:02 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:16:02 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/924314656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:16:02 compute-1 nova_compute[235132]: 2025-10-10 10:16:02.175 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Instance bd82d620-e0e5-4fb1-b8a5-973cefbcd107 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 10 10:16:02 compute-1 nova_compute[235132]: 2025-10-10 10:16:02.176 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:16:02 compute-1 nova_compute[235132]: 2025-10-10 10:16:02.176 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:16:02 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:02 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:02 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:02.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:02 compute-1 nova_compute[235132]: 2025-10-10 10:16:02.270 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Refreshing inventories for resource provider c9b2c4a3-cb19-4387-8719-36027e3cdaec _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 10 10:16:02 compute-1 nova_compute[235132]: 2025-10-10 10:16:02.357 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Updating ProviderTree inventory for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 10 10:16:02 compute-1 nova_compute[235132]: 2025-10-10 10:16:02.358 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Updating inventory in ProviderTree for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 10 10:16:02 compute-1 nova_compute[235132]: 2025-10-10 10:16:02.387 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Refreshing aggregate associations for resource provider c9b2c4a3-cb19-4387-8719-36027e3cdaec, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 10 10:16:02 compute-1 nova_compute[235132]: 2025-10-10 10:16:02.428 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Refreshing trait associations for resource provider c9b2c4a3-cb19-4387-8719-36027e3cdaec, traits: HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_F16C,HW_CPU_X86_AVX,HW_CPU_X86_FMA3,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE42,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI2,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 10 10:16:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:16:02 compute-1 nova_compute[235132]: 2025-10-10 10:16:02.470 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:16:02 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:02 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:02 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:02.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:16:02 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/369795556' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:16:02 compute-1 nova_compute[235132]: 2025-10-10 10:16:02.962 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:16:02 compute-1 nova_compute[235132]: 2025-10-10 10:16:02.970 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:16:02 compute-1 nova_compute[235132]: 2025-10-10 10:16:02.994 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:16:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:16:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:16:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:16:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:16:03 compute-1 nova_compute[235132]: 2025-10-10 10:16:03.026 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:16:03 compute-1 nova_compute[235132]: 2025-10-10 10:16:03.027 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.038s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:16:03 compute-1 nova_compute[235132]: 2025-10-10 10:16:03.028 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:16:03 compute-1 nova_compute[235132]: 2025-10-10 10:16:03.029 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 10 10:16:03 compute-1 nova_compute[235132]: 2025-10-10 10:16:03.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:03 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/369795556' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:16:04 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:04 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:04 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:04.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:04 compute-1 ceph-mon[79167]: pgmap v903: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 7.3 KiB/s wr, 2 op/s
Oct 10 10:16:04 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:04 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:04 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:04.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:05 compute-1 nova_compute[235132]: 2025-10-10 10:16:05.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:06 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:06 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:16:06 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:06.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:16:06 compute-1 ceph-mon[79167]: pgmap v904: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 7.3 KiB/s wr, 1 op/s
Oct 10 10:16:06 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1563002844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:16:06 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:06 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:06 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:06.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:16:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:16:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:16:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:16:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:16:08 compute-1 nova_compute[235132]: 2025-10-10 10:16:08.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:08 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:08 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:08 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:08.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:08 compute-1 ceph-mon[79167]: pgmap v905: 353 pgs: 353 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 7.3 KiB/s wr, 1 op/s
Oct 10 10:16:08 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:08 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:08 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:08.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:10 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:10 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:16:10 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:10.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:16:10 compute-1 ceph-mon[79167]: pgmap v906: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct 10 10:16:10 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1304454409' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:16:10 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2889548273' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:16:10 compute-1 nova_compute[235132]: 2025-10-10 10:16:10.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:10 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:10 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:10 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:10.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:12 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:12 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:12 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:12.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:12 compute-1 ceph-mon[79167]: pgmap v907: 353 pgs: 353 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:16:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:16:12 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:12 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:16:12 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:12.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:16:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:16:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:16:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:16:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:13 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:16:13 compute-1 nova_compute[235132]: 2025-10-10 10:16:13.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:14 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:14 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:14 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:14.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:14 compute-1 ceph-mon[79167]: pgmap v908: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Oct 10 10:16:14 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:14 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:16:14 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:14.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:16:14 compute-1 podman[245056]: 2025-10-10 10:16:14.956347997 +0000 UTC m=+0.059482611 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 10 10:16:15 compute-1 nova_compute[235132]: 2025-10-10 10:16:15.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:16 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:16 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:16 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:16.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:16 compute-1 ceph-mon[79167]: pgmap v909: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Oct 10 10:16:16 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:16 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:16 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:16.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:17 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:16:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:16:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:16:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:16:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:16:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:16:18 compute-1 nova_compute[235132]: 2025-10-10 10:16:18.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:18 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:18 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:18 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:18.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:18 compute-1 ceph-mon[79167]: pgmap v910: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Oct 10 10:16:18 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:18 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:18 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:18.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:19 compute-1 sudo[245077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:16:19 compute-1 sudo[245077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:16:19 compute-1 sudo[245077]: pam_unix(sudo:session): session closed for user root
Oct 10 10:16:20 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:20 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:20 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:20.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:20 compute-1 ceph-mon[79167]: pgmap v911: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Oct 10 10:16:20 compute-1 nova_compute[235132]: 2025-10-10 10:16:20.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:20 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:20 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:16:20 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:20.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:16:22 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:22 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:22 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:22.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:22 compute-1 ceph-mon[79167]: pgmap v912: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 75 op/s
Oct 10 10:16:22 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:16:22 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:22 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:22 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:22.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:16:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:16:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:16:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:16:23 compute-1 nova_compute[235132]: 2025-10-10 10:16:23.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:23 compute-1 ceph-mon[79167]: pgmap v913: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 19 KiB/s wr, 77 op/s
Oct 10 10:16:24 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:24 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:24 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:24.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:24 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:24 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:24 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:24.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:25 compute-1 nova_compute[235132]: 2025-10-10 10:16:25.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:26 compute-1 ceph-mon[79167]: pgmap v914: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.4 KiB/s wr, 66 op/s
Oct 10 10:16:26 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:26 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:26 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:26.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:26 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:26 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:26 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:26.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:27 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/1213483499' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:16:27 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/1213483499' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:16:27 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:16:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:16:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:16:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:16:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:16:28 compute-1 ceph-mon[79167]: pgmap v915: 353 pgs: 353 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.4 KiB/s wr, 66 op/s
Oct 10 10:16:28 compute-1 nova_compute[235132]: 2025-10-10 10:16:28.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:28 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:28 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:28 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:28.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:28 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:28 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:28 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:28.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:28 compute-1 podman[245107]: 2025-10-10 10:16:28.981022606 +0000 UTC m=+0.079922222 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 10 10:16:28 compute-1 podman[245108]: 2025-10-10 10:16:28.991416791 +0000 UTC m=+0.083357906 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd)
Oct 10 10:16:29 compute-1 podman[245109]: 2025-10-10 10:16:29.002642759 +0000 UTC m=+0.094330888 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 10 10:16:30 compute-1 ceph-mon[79167]: pgmap v916: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Oct 10 10:16:30 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:30 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:16:30 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:30.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:16:30 compute-1 nova_compute[235132]: 2025-10-10 10:16:30.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:30 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:30 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:30 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:30.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:32 compute-1 ceph-mon[79167]: pgmap v917: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 288 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 10 10:16:32 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:16:32 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:32 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:32 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:32.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:32 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:16:32 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:32 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:32 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:32.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:16:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:16:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:16:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:16:33 compute-1 nova_compute[235132]: 2025-10-10 10:16:33.048 2 DEBUG nova.compute.manager [req-2a4021bd-f49f-468a-bc14-6b20c899b36a req-5b94a1ac-f410-479f-9845-5d506131ef23 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Received event network-changed-a6efe4ab-2a26-46aa-8bf2-3dda99ea238c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:16:33 compute-1 nova_compute[235132]: 2025-10-10 10:16:33.048 2 DEBUG nova.compute.manager [req-2a4021bd-f49f-468a-bc14-6b20c899b36a req-5b94a1ac-f410-479f-9845-5d506131ef23 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Refreshing instance network info cache due to event network-changed-a6efe4ab-2a26-46aa-8bf2-3dda99ea238c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 10 10:16:33 compute-1 nova_compute[235132]: 2025-10-10 10:16:33.049 2 DEBUG oslo_concurrency.lockutils [req-2a4021bd-f49f-468a-bc14-6b20c899b36a req-5b94a1ac-f410-479f-9845-5d506131ef23 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:16:33 compute-1 nova_compute[235132]: 2025-10-10 10:16:33.049 2 DEBUG oslo_concurrency.lockutils [req-2a4021bd-f49f-468a-bc14-6b20c899b36a req-5b94a1ac-f410-479f-9845-5d506131ef23 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquired lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:16:33 compute-1 nova_compute[235132]: 2025-10-10 10:16:33.050 2 DEBUG nova.network.neutron [req-2a4021bd-f49f-468a-bc14-6b20c899b36a req-5b94a1ac-f410-479f-9845-5d506131ef23 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Refreshing network info cache for port a6efe4ab-2a26-46aa-8bf2-3dda99ea238c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 10 10:16:33 compute-1 nova_compute[235132]: 2025-10-10 10:16:33.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:34 compute-1 ceph-mon[79167]: pgmap v918: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 288 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 10 10:16:34 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:34 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:34 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:34.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:34 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:34 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:34 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:34.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:34 compute-1 nova_compute[235132]: 2025-10-10 10:16:34.888 2 DEBUG nova.network.neutron [req-2a4021bd-f49f-468a-bc14-6b20c899b36a req-5b94a1ac-f410-479f-9845-5d506131ef23 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Updated VIF entry in instance network info cache for port a6efe4ab-2a26-46aa-8bf2-3dda99ea238c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 10 10:16:34 compute-1 nova_compute[235132]: 2025-10-10 10:16:34.888 2 DEBUG nova.network.neutron [req-2a4021bd-f49f-468a-bc14-6b20c899b36a req-5b94a1ac-f410-479f-9845-5d506131ef23 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Updating instance_info_cache with network_info: [{"id": "562e8418-d47e-4fd1-8a23-094e0ce40097", "address": "fa:16:3e:73:fc:1f", "network": {"id": "ebfb122d-a6ca-4257-952a-e1a888448e1c", "bridge": "br-int", "label": "tempest-network-smoke--1217728793", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap562e8418-d4", "ovs_interfaceid": "562e8418-d47e-4fd1-8a23-094e0ce40097", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "address": "fa:16:3e:ae:b1:45", "network": {"id": "87f6394d-4290-4eca-8ba0-18711f3ad6e0", "bridge": "br-int", "label": "tempest-network-smoke--1629699660", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6efe4ab-2a", "ovs_interfaceid": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:16:34 compute-1 nova_compute[235132]: 2025-10-10 10:16:34.914 2 DEBUG oslo_concurrency.lockutils [req-2a4021bd-f49f-468a-bc14-6b20c899b36a req-5b94a1ac-f410-479f-9845-5d506131ef23 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Releasing lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:16:35 compute-1 nova_compute[235132]: 2025-10-10 10:16:35.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:36 compute-1 ceph-mon[79167]: pgmap v919: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 284 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 10 10:16:36 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:36 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:36 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:36.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:36 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:36 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:36 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:36.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:16:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:16:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:16:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:16:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:16:38 compute-1 nova_compute[235132]: 2025-10-10 10:16:38.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:38 compute-1 ceph-mon[79167]: pgmap v920: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 284 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 10 10:16:38 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:38 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:38 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:38.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:38 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:38 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:38 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:38.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:39 compute-1 sudo[245179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:16:39 compute-1 sudo[245179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:16:39 compute-1 sudo[245179]: pam_unix(sudo:session): session closed for user root
Oct 10 10:16:40 compute-1 ceph-mon[79167]: pgmap v921: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 285 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 10 10:16:40 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:40 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:40 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:40.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:40 compute-1 nova_compute[235132]: 2025-10-10 10:16:40.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:40 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:40 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:40 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:40.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:42 compute-1 ceph-mon[79167]: pgmap v922: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 15 KiB/s wr, 1 op/s
Oct 10 10:16:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:16:42.213 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:16:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:16:42.214 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:16:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:16:42.214 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:16:42 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:42 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:16:42 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:42.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:16:42 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:16:42 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:42 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:42 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:42.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:16:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:16:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:16:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:43 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:16:43 compute-1 nova_compute[235132]: 2025-10-10 10:16:43.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:44 compute-1 ceph-mon[79167]: pgmap v923: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 25 KiB/s wr, 3 op/s
Oct 10 10:16:44 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:44 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:44 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:44.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:44 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:44 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:44 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:44.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:45 compute-1 nova_compute[235132]: 2025-10-10 10:16:45.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:45 compute-1 podman[245208]: 2025-10-10 10:16:45.988932215 +0000 UTC m=+0.085968057 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_metadata_agent)
Oct 10 10:16:46 compute-1 ceph-mon[79167]: pgmap v924: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 12 KiB/s wr, 2 op/s
Oct 10 10:16:46 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:46 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:16:46 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:46.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:16:46 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:46 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:16:46 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:46.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:16:47 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:16:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:16:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:16:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:16:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:16:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:16:48 compute-1 nova_compute[235132]: 2025-10-10 10:16:48.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:48 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:48 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:16:48 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:48.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:16:48 compute-1 ceph-mon[79167]: pgmap v925: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 12 KiB/s wr, 2 op/s
Oct 10 10:16:48 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:48 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:16:48 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:48.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:16:49 compute-1 sudo[245228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:16:49 compute-1 sudo[245228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:16:49 compute-1 sudo[245228]: pam_unix(sudo:session): session closed for user root
Oct 10 10:16:49 compute-1 sudo[245253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:16:49 compute-1 sudo[245253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:16:50 compute-1 sudo[245253]: pam_unix(sudo:session): session closed for user root
Oct 10 10:16:50 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:50 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:50 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:50.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:50 compute-1 ceph-mon[79167]: pgmap v926: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 15 KiB/s wr, 3 op/s
Oct 10 10:16:50 compute-1 nova_compute[235132]: 2025-10-10 10:16:50.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:50 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:50 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:16:50 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:50.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:16:52 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:52 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:16:52 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:52.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:16:52 compute-1 ceph-mon[79167]: pgmap v927: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 13 KiB/s wr, 2 op/s
Oct 10 10:16:52 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:16:52 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:16:52 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:16:52 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:16:52 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:16:52 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:16:52 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:16:52 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:16:52 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:16:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:16:52 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:52 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:52 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:52.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:16:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:16:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:16:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:16:53 compute-1 nova_compute[235132]: 2025-10-10 10:16:53.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:53 compute-1 nova_compute[235132]: 2025-10-10 10:16:53.971 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:16:53 compute-1 nova_compute[235132]: 2025-10-10 10:16:53.972 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:16:53 compute-1 nova_compute[235132]: 2025-10-10 10:16:53.972 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 10:16:54 compute-1 nova_compute[235132]: 2025-10-10 10:16:54.040 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:16:54 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:54 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:54 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:54.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:54 compute-1 ceph-mon[79167]: pgmap v928: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 14 KiB/s wr, 3 op/s
Oct 10 10:16:54 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:54 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:16:54 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:54.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:16:55 compute-1 nova_compute[235132]: 2025-10-10 10:16:55.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:16:55 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/3908480547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:16:55 compute-1 nova_compute[235132]: 2025-10-10 10:16:55.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:56 compute-1 nova_compute[235132]: 2025-10-10 10:16:56.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:16:56 compute-1 nova_compute[235132]: 2025-10-10 10:16:56.044 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 10:16:56 compute-1 nova_compute[235132]: 2025-10-10 10:16:56.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 10:16:56 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:56 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:56 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:56.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:56 compute-1 ceph-mon[79167]: pgmap v929: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.3 KiB/s wr, 1 op/s
Oct 10 10:16:56 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2955662946' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:16:56 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:56 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:56 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:56.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:56 compute-1 nova_compute[235132]: 2025-10-10 10:16:56.880 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:16:56 compute-1 nova_compute[235132]: 2025-10-10 10:16:56.881 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquired lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:16:56 compute-1 nova_compute[235132]: 2025-10-10 10:16:56.881 2 DEBUG nova.network.neutron [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 10 10:16:56 compute-1 nova_compute[235132]: 2025-10-10 10:16:56.882 2 DEBUG nova.objects.instance [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lazy-loading 'info_cache' on Instance uuid bd82d620-e0e5-4fb1-b8a5-973cefbcd107 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 10:16:56 compute-1 sudo[245315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:16:56 compute-1 sudo[245315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:16:56 compute-1 sudo[245315]: pam_unix(sudo:session): session closed for user root
Oct 10 10:16:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:16:57 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:16:57 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:16:57 compute-1 ceph-mon[79167]: pgmap v930: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.3 KiB/s wr, 1 op/s
Oct 10 10:16:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:16:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:16:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:16:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:16:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:16:58 compute-1 nova_compute[235132]: 2025-10-10 10:16:58.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:16:58 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:58 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:16:58 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:16:58.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:16:58 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:16:58 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:16:58 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:16:58.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:16:58 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2386263607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:16:59 compute-1 sudo[245341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:16:59 compute-1 sudo[245341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:16:59 compute-1 sudo[245341]: pam_unix(sudo:session): session closed for user root
Oct 10 10:16:59 compute-1 podman[245366]: 2025-10-10 10:16:59.382240346 +0000 UTC m=+0.060149380 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 10 10:16:59 compute-1 podman[245365]: 2025-10-10 10:16:59.43418575 +0000 UTC m=+0.105152274 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:16:59 compute-1 podman[245367]: 2025-10-10 10:16:59.440917524 +0000 UTC m=+0.106423728 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 10 10:16:59 compute-1 ceph-mon[79167]: pgmap v931: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 7.7 KiB/s wr, 2 op/s
Oct 10 10:16:59 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/854478764' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:17:00 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:00 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:17:00 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:00.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:17:00 compute-1 nova_compute[235132]: 2025-10-10 10:17:00.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:00 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:00 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:17:00 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:00.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:17:00 compute-1 nova_compute[235132]: 2025-10-10 10:17:00.921 2 DEBUG nova.network.neutron [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Updating instance_info_cache with network_info: [{"id": "562e8418-d47e-4fd1-8a23-094e0ce40097", "address": "fa:16:3e:73:fc:1f", "network": {"id": "ebfb122d-a6ca-4257-952a-e1a888448e1c", "bridge": "br-int", "label": "tempest-network-smoke--1217728793", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap562e8418-d4", "ovs_interfaceid": "562e8418-d47e-4fd1-8a23-094e0ce40097", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "address": "fa:16:3e:ae:b1:45", "network": {"id": "87f6394d-4290-4eca-8ba0-18711f3ad6e0", "bridge": "br-int", "label": "tempest-network-smoke--1629699660", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6efe4ab-2a", "ovs_interfaceid": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:17:00 compute-1 nova_compute[235132]: 2025-10-10 10:17:00.942 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Releasing lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:17:00 compute-1 nova_compute[235132]: 2025-10-10 10:17:00.943 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 10 10:17:00 compute-1 nova_compute[235132]: 2025-10-10 10:17:00.944 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:17:00 compute-1 nova_compute[235132]: 2025-10-10 10:17:00.944 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:17:00 compute-1 nova_compute[235132]: 2025-10-10 10:17:00.944 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:17:00 compute-1 nova_compute[235132]: 2025-10-10 10:17:00.969 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:17:00 compute-1 nova_compute[235132]: 2025-10-10 10:17:00.970 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:17:00 compute-1 nova_compute[235132]: 2025-10-10 10:17:00.970 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:17:00 compute-1 nova_compute[235132]: 2025-10-10 10:17:00.970 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:17:00 compute-1 nova_compute[235132]: 2025-10-10 10:17:00.970 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:17:01 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:17:01 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/867703427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:17:01 compute-1 nova_compute[235132]: 2025-10-10 10:17:01.439 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:17:01 compute-1 nova_compute[235132]: 2025-10-10 10:17:01.516 2 DEBUG nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 10 10:17:01 compute-1 nova_compute[235132]: 2025-10-10 10:17:01.517 2 DEBUG nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 10 10:17:01 compute-1 nova_compute[235132]: 2025-10-10 10:17:01.714 2 WARNING nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:17:01 compute-1 nova_compute[235132]: 2025-10-10 10:17:01.715 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4736MB free_disk=59.89699172973633GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:17:01 compute-1 nova_compute[235132]: 2025-10-10 10:17:01.715 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:17:01 compute-1 nova_compute[235132]: 2025-10-10 10:17:01.715 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:17:01 compute-1 nova_compute[235132]: 2025-10-10 10:17:01.783 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Instance bd82d620-e0e5-4fb1-b8a5-973cefbcd107 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 10 10:17:01 compute-1 nova_compute[235132]: 2025-10-10 10:17:01.784 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:17:01 compute-1 nova_compute[235132]: 2025-10-10 10:17:01.784 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:17:01 compute-1 nova_compute[235132]: 2025-10-10 10:17:01.826 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:17:02 compute-1 ceph-mon[79167]: pgmap v932: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.3 KiB/s wr, 1 op/s
Oct 10 10:17:02 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:17:02 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/867703427' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:17:02 compute-1 ovn_controller[131749]: 2025-10-10T10:17:02Z|00079|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 10 10:17:02 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:02 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:02 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:02.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:17:02 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1641573569' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:17:02 compute-1 nova_compute[235132]: 2025-10-10 10:17:02.293 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:17:02 compute-1 nova_compute[235132]: 2025-10-10 10:17:02.299 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:17:02 compute-1 nova_compute[235132]: 2025-10-10 10:17:02.317 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:17:02 compute-1 nova_compute[235132]: 2025-10-10 10:17:02.320 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:17:02 compute-1 nova_compute[235132]: 2025-10-10 10:17:02.321 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:17:02 compute-1 nova_compute[235132]: 2025-10-10 10:17:02.420 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:17:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:17:02 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:02 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:17:02 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:02.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:17:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:17:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:17:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:17:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:17:03 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/1641573569' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:17:03 compute-1 nova_compute[235132]: 2025-10-10 10:17:03.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:04 compute-1 ceph-mon[79167]: pgmap v933: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 4.3 KiB/s wr, 2 op/s
Oct 10 10:17:04 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:04 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:04 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:04.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:04 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #55. Immutable memtables: 0.
Oct 10 10:17:04 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:17:04.410275) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 10:17:04 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 55
Oct 10 10:17:04 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091424410379, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 1676, "num_deletes": 257, "total_data_size": 4231648, "memory_usage": 4297216, "flush_reason": "Manual Compaction"}
Oct 10 10:17:04 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #56: started
Oct 10 10:17:04 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091424427122, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 56, "file_size": 2743362, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28342, "largest_seqno": 30013, "table_properties": {"data_size": 2736402, "index_size": 3967, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14641, "raw_average_key_size": 19, "raw_value_size": 2722384, "raw_average_value_size": 3634, "num_data_blocks": 174, "num_entries": 749, "num_filter_entries": 749, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760091288, "oldest_key_time": 1760091288, "file_creation_time": 1760091424, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:17:04 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 16903 microseconds, and 9039 cpu microseconds.
Oct 10 10:17:04 compute-1 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:17:04 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:17:04.427183) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #56: 2743362 bytes OK
Oct 10 10:17:04 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:17:04.427211) [db/memtable_list.cc:519] [default] Level-0 commit table #56 started
Oct 10 10:17:04 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:17:04.428888) [db/memtable_list.cc:722] [default] Level-0 commit table #56: memtable #1 done
Oct 10 10:17:04 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:17:04.428910) EVENT_LOG_v1 {"time_micros": 1760091424428903, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 10:17:04 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:17:04.428931) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 10:17:04 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 4223883, prev total WAL file size 4223883, number of live WAL files 2.
Oct 10 10:17:04 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000052.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:17:04 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:17:04.430801) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353032' seq:72057594037927935, type:22 .. '6C6F676D00373535' seq:0, type:0; will stop at (end)
Oct 10 10:17:04 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 10:17:04 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [56(2679KB)], [54(13MB)]
Oct 10 10:17:04 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091424430850, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [56], "files_L6": [54], "score": -1, "input_data_size": 17069532, "oldest_snapshot_seqno": -1}
Oct 10 10:17:04 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #57: 6031 keys, 16925428 bytes, temperature: kUnknown
Oct 10 10:17:04 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091424518432, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 57, "file_size": 16925428, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16882185, "index_size": 27069, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15109, "raw_key_size": 153432, "raw_average_key_size": 25, "raw_value_size": 16770537, "raw_average_value_size": 2780, "num_data_blocks": 1111, "num_entries": 6031, "num_filter_entries": 6031, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089522, "oldest_key_time": 0, "file_creation_time": 1760091424, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 57, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:17:04 compute-1 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:17:04 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:17:04.519134) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 16925428 bytes
Oct 10 10:17:04 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:17:04.520729) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 193.9 rd, 192.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 13.7 +0.0 blob) out(16.1 +0.0 blob), read-write-amplify(12.4) write-amplify(6.2) OK, records in: 6563, records dropped: 532 output_compression: NoCompression
Oct 10 10:17:04 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:17:04.520759) EVENT_LOG_v1 {"time_micros": 1760091424520746, "job": 32, "event": "compaction_finished", "compaction_time_micros": 88051, "compaction_time_cpu_micros": 59141, "output_level": 6, "num_output_files": 1, "total_output_size": 16925428, "num_input_records": 6563, "num_output_records": 6031, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 10:17:04 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:17:04 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091424522215, "job": 32, "event": "table_file_deletion", "file_number": 56}
Oct 10 10:17:04 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000054.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:17:04 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091424527680, "job": 32, "event": "table_file_deletion", "file_number": 54}
Oct 10 10:17:04 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:17:04.430701) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:17:04 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:17:04.527806) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:17:04 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:17:04.527814) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:17:04 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:17:04.527817) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:17:04 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:17:04.527820) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:17:04 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:17:04.527822) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:17:04 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:04 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:17:04 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:04.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:17:05 compute-1 ceph-mon[79167]: pgmap v934: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 3.3 KiB/s wr, 1 op/s
Oct 10 10:17:05 compute-1 nova_compute[235132]: 2025-10-10 10:17:05.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:06 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:06 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:06 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:06.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:06 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:06 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:17:06 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:06.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:17:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:17:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:17:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:17:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:17:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:17:08 compute-1 ceph-mon[79167]: pgmap v935: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 3.3 KiB/s wr, 1 op/s
Oct 10 10:17:08 compute-1 nova_compute[235132]: 2025-10-10 10:17:08.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:08 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:08 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:17:08 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:08.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:17:08 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:08 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:17:08 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:08.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:17:10 compute-1 ceph-mon[79167]: pgmap v936: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 108 KiB/s rd, 5.3 KiB/s wr, 179 op/s
Oct 10 10:17:10 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:10 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:10 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:10.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:10 compute-1 nova_compute[235132]: 2025-10-10 10:17:10.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:10 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:10 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:10 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:10.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:12 compute-1 ceph-mon[79167]: pgmap v937: 353 pgs: 353 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 107 KiB/s rd, 2.0 KiB/s wr, 178 op/s
Oct 10 10:17:12 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:12 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:12 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:12.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:17:12 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:12 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:12 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:12.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:17:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:17:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:17:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:13 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:17:13 compute-1 nova_compute[235132]: 2025-10-10 10:17:13.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:13 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:17:13.206 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:dc:6a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:2f:dd:4e:d8:41'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:17:13 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:17:13.206 141156 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 10 10:17:13 compute-1 nova_compute[235132]: 2025-10-10 10:17:13.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:14 compute-1 ceph-mon[79167]: pgmap v938: 353 pgs: 353 active+clean; 121 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 127 KiB/s rd, 4.2 KiB/s wr, 207 op/s
Oct 10 10:17:14 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2478587300' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:17:14 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:14 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:14 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:14.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:14 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:14 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:14 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:14.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:15 compute-1 nova_compute[235132]: 2025-10-10 10:17:15.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:16 compute-1 ceph-mon[79167]: pgmap v939: 353 pgs: 353 active+clean; 121 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 126 KiB/s rd, 4.2 KiB/s wr, 206 op/s
Oct 10 10:17:16 compute-1 nova_compute[235132]: 2025-10-10 10:17:16.225 2 DEBUG oslo_concurrency.lockutils [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "interface-bd82d620-e0e5-4fb1-b8a5-973cefbcd107-a6efe4ab-2a26-46aa-8bf2-3dda99ea238c" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:17:16 compute-1 nova_compute[235132]: 2025-10-10 10:17:16.226 2 DEBUG oslo_concurrency.lockutils [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "interface-bd82d620-e0e5-4fb1-b8a5-973cefbcd107-a6efe4ab-2a26-46aa-8bf2-3dda99ea238c" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:17:16 compute-1 nova_compute[235132]: 2025-10-10 10:17:16.245 2 DEBUG nova.objects.instance [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'flavor' on Instance uuid bd82d620-e0e5-4fb1-b8a5-973cefbcd107 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 10:17:16 compute-1 nova_compute[235132]: 2025-10-10 10:17:16.269 2 DEBUG nova.virt.libvirt.vif [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-10T10:15:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-217348562',display_name='tempest-TestNetworkBasicOps-server-217348562',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-217348562',id=6,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHLzdAHDdUURWXD0NwbnzkKciFKEQ2omFqKVpiZUQ/jQkwx0IlaJ48FUTUghTozEFkbgWKl3XHIfnAKs6ai2Am8DZErVGD6iO1tzsuGiO5n1KsYJdS5ZP3lMvRFTeABsRg==',key_name='tempest-TestNetworkBasicOps-1625432950',keypairs=<?>,launch_index=0,launched_at=2025-10-10T10:15:28Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-qt8amg0k',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-10T10:15:28Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=bd82d620-e0e5-4fb1-b8a5-973cefbcd107,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "address": "fa:16:3e:ae:b1:45", "network": {"id": "87f6394d-4290-4eca-8ba0-18711f3ad6e0", "bridge": "br-int", "label": "tempest-network-smoke--1629699660", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6efe4ab-2a", "ovs_interfaceid": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 10 10:17:16 compute-1 nova_compute[235132]: 2025-10-10 10:17:16.269 2 DEBUG nova.network.os_vif_util [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "address": "fa:16:3e:ae:b1:45", "network": {"id": "87f6394d-4290-4eca-8ba0-18711f3ad6e0", "bridge": "br-int", "label": "tempest-network-smoke--1629699660", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6efe4ab-2a", "ovs_interfaceid": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 10 10:17:16 compute-1 nova_compute[235132]: 2025-10-10 10:17:16.270 2 DEBUG nova.network.os_vif_util [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ae:b1:45,bridge_name='br-int',has_traffic_filtering=True,id=a6efe4ab-2a26-46aa-8bf2-3dda99ea238c,network=Network(87f6394d-4290-4eca-8ba0-18711f3ad6e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6efe4ab-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 10 10:17:16 compute-1 nova_compute[235132]: 2025-10-10 10:17:16.274 2 DEBUG nova.virt.libvirt.guest [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ae:b1:45"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa6efe4ab-2a"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 10 10:17:16 compute-1 nova_compute[235132]: 2025-10-10 10:17:16.278 2 DEBUG nova.virt.libvirt.guest [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ae:b1:45"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa6efe4ab-2a"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 10 10:17:16 compute-1 nova_compute[235132]: 2025-10-10 10:17:16.282 2 DEBUG nova.virt.libvirt.driver [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Attempting to detach device tapa6efe4ab-2a from instance bd82d620-e0e5-4fb1-b8a5-973cefbcd107 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 10 10:17:16 compute-1 nova_compute[235132]: 2025-10-10 10:17:16.283 2 DEBUG nova.virt.libvirt.guest [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] detach device xml: <interface type="ethernet">
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <mac address="fa:16:3e:ae:b1:45"/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <model type="virtio"/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <driver name="vhost" rx_queue_size="512"/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <mtu size="1442"/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <target dev="tapa6efe4ab-2a"/>
Oct 10 10:17:16 compute-1 nova_compute[235132]: </interface>
Oct 10 10:17:16 compute-1 nova_compute[235132]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 10 10:17:16 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:16 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:16 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:16.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:16 compute-1 nova_compute[235132]: 2025-10-10 10:17:16.292 2 DEBUG nova.virt.libvirt.guest [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ae:b1:45"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa6efe4ab-2a"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 10 10:17:16 compute-1 nova_compute[235132]: 2025-10-10 10:17:16.297 2 DEBUG nova.virt.libvirt.guest [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:ae:b1:45"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa6efe4ab-2a"/></interface>not found in domain: <domain type='kvm' id='4'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <name>instance-00000006</name>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <uuid>bd82d620-e0e5-4fb1-b8a5-973cefbcd107</uuid>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <metadata>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <nova:name>tempest-TestNetworkBasicOps-server-217348562</nova:name>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <nova:creationTime>2025-10-10 10:15:55</nova:creationTime>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <nova:flavor name="m1.nano">
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <nova:memory>128</nova:memory>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <nova:disk>1</nova:disk>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <nova:swap>0</nova:swap>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <nova:ephemeral>0</nova:ephemeral>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <nova:vcpus>1</nova:vcpus>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   </nova:flavor>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <nova:owner>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <nova:user uuid="7956778c03764aaf8906c9b435337976">tempest-TestNetworkBasicOps-188749107-project-member</nova:user>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <nova:project uuid="d5e531d4b440422d946eaf6fd4e166f7">tempest-TestNetworkBasicOps-188749107</nova:project>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   </nova:owner>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <nova:root type="image" uuid="5ae78700-970d-45b4-a57d-978a054c7519"/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <nova:ports>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <nova:port uuid="562e8418-d47e-4fd1-8a23-094e0ce40097">
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </nova:port>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <nova:port uuid="a6efe4ab-2a26-46aa-8bf2-3dda99ea238c">
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <nova:ip type="fixed" address="10.100.0.29" ipVersion="4"/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </nova:port>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   </nova:ports>
Oct 10 10:17:16 compute-1 nova_compute[235132]: </nova:instance>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   </metadata>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <memory unit='KiB'>131072</memory>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <vcpu placement='static'>1</vcpu>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <resource>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <partition>/machine</partition>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   </resource>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <sysinfo type='smbios'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <system>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <entry name='manufacturer'>RDO</entry>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <entry name='product'>OpenStack Compute</entry>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <entry name='serial'>bd82d620-e0e5-4fb1-b8a5-973cefbcd107</entry>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <entry name='uuid'>bd82d620-e0e5-4fb1-b8a5-973cefbcd107</entry>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <entry name='family'>Virtual Machine</entry>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </system>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   </sysinfo>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <os>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <boot dev='hd'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <smbios mode='sysinfo'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   </os>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <features>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <acpi/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <apic/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <vmcoreinfo state='on'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   </features>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <cpu mode='custom' match='exact' check='full'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <vendor>AMD</vendor>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='require' name='x2apic'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='require' name='tsc-deadline'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='require' name='hypervisor'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='require' name='tsc_adjust'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='require' name='spec-ctrl'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='require' name='stibp'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='require' name='arch-capabilities'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='require' name='ssbd'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='require' name='cmp_legacy'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='require' name='overflow-recov'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='require' name='succor'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='require' name='ibrs'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='require' name='amd-ssbd'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='require' name='virt-ssbd'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='disable' name='lbrv'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='disable' name='tsc-scale'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='disable' name='vmcb-clean'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='disable' name='flushbyasid'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='disable' name='pause-filter'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='disable' name='pfthreshold'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='require' name='rdctl-no'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='require' name='mds-no'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='require' name='pschange-mc-no'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='require' name='gds-no'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='require' name='rfds-no'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='disable' name='xsaves'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='disable' name='svm'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='require' name='topoext'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='disable' name='npt'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='disable' name='nrip-save'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   </cpu>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <clock offset='utc'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <timer name='pit' tickpolicy='delay'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <timer name='hpet' present='no'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   </clock>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <on_poweroff>destroy</on_poweroff>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <on_reboot>restart</on_reboot>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <on_crash>destroy</on_crash>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <devices>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <disk type='network' device='disk'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <driver name='qemu' type='raw' cache='none'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <auth username='openstack'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:         <secret type='ceph' uuid='21f084a3-af34-5230-afe4-ea5cd24a55f4'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       </auth>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <source protocol='rbd' name='vms/bd82d620-e0e5-4fb1-b8a5-973cefbcd107_disk' index='2'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:         <host name='192.168.122.100' port='6789'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:         <host name='192.168.122.102' port='6789'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:         <host name='192.168.122.101' port='6789'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       </source>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target dev='vda' bus='virtio'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='virtio-disk0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </disk>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <disk type='network' device='cdrom'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <driver name='qemu' type='raw' cache='none'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <auth username='openstack'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:         <secret type='ceph' uuid='21f084a3-af34-5230-afe4-ea5cd24a55f4'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       </auth>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <source protocol='rbd' name='vms/bd82d620-e0e5-4fb1-b8a5-973cefbcd107_disk.config' index='1'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:         <host name='192.168.122.100' port='6789'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:         <host name='192.168.122.102' port='6789'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:         <host name='192.168.122.101' port='6789'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       </source>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target dev='sda' bus='sata'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <readonly/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='sata0-0-0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </disk>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='0' model='pcie-root'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pcie.0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='1' port='0x10'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.1'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='2' port='0x11'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.2'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='3' port='0x12'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.3'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='4' port='0x13'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.4'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='5' port='0x14'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.5'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='6' port='0x15'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.6'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='7' port='0x16'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.7'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='8' port='0x17'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.8'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='9' port='0x18'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.9'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='10' port='0x19'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.10'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='11' port='0x1a'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.11'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='12' port='0x1b'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.12'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='13' port='0x1c'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.13'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='14' port='0x1d'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.14'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='15' port='0x1e'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.15'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='16' port='0x1f'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.16'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='17' port='0x20'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.17'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='18' port='0x21'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.18'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='19' port='0x22'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.19'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='20' port='0x23'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.20'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='21' port='0x24'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.21'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='22' port='0x25'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.22'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='23' port='0x26'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.23'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='24' port='0x27'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.24'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='25' port='0x28'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.25'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-pci-bridge'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.26'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='usb'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='sata' index='0'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='ide'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <interface type='ethernet'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <mac address='fa:16:3e:73:fc:1f'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target dev='tap562e8418-d4'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model type='virtio'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <driver name='vhost' rx_queue_size='512'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <mtu size='1442'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='net0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </interface>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <interface type='ethernet'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <mac address='fa:16:3e:ae:b1:45'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target dev='tapa6efe4ab-2a'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model type='virtio'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <driver name='vhost' rx_queue_size='512'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <mtu size='1442'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='net1'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </interface>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <serial type='pty'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <source path='/dev/pts/0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <log file='/var/lib/nova/instances/bd82d620-e0e5-4fb1-b8a5-973cefbcd107/console.log' append='off'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target type='isa-serial' port='0'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:         <model name='isa-serial'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       </target>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='serial0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </serial>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <console type='pty' tty='/dev/pts/0'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <source path='/dev/pts/0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <log file='/var/lib/nova/instances/bd82d620-e0e5-4fb1-b8a5-973cefbcd107/console.log' append='off'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target type='serial' port='0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='serial0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </console>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <input type='tablet' bus='usb'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='input0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='usb' bus='0' port='1'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </input>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <input type='mouse' bus='ps2'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='input1'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </input>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <input type='keyboard' bus='ps2'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='input2'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </input>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <listen type='address' address='::0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </graphics>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <audio id='1' type='none'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <video>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model type='virtio' heads='1' primary='yes'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='video0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </video>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <watchdog model='itco' action='reset'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='watchdog0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </watchdog>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <memballoon model='virtio'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <stats period='10'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='balloon0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </memballoon>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <rng model='virtio'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <backend model='random'>/dev/urandom</backend>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='rng0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </rng>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   </devices>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <label>system_u:system_r:svirt_t:s0:c141,c952</label>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c141,c952</imagelabel>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   </seclabel>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <label>+107:+107</label>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <imagelabel>+107:+107</imagelabel>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   </seclabel>
Oct 10 10:17:16 compute-1 nova_compute[235132]: </domain>
Oct 10 10:17:16 compute-1 nova_compute[235132]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 10 10:17:16 compute-1 nova_compute[235132]: 2025-10-10 10:17:16.299 2 INFO nova.virt.libvirt.driver [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Successfully detached device tapa6efe4ab-2a from instance bd82d620-e0e5-4fb1-b8a5-973cefbcd107 from the persistent domain config.
Oct 10 10:17:16 compute-1 nova_compute[235132]: 2025-10-10 10:17:16.300 2 DEBUG nova.virt.libvirt.driver [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] (1/8): Attempting to detach device tapa6efe4ab-2a with device alias net1 from instance bd82d620-e0e5-4fb1-b8a5-973cefbcd107 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 10 10:17:16 compute-1 nova_compute[235132]: 2025-10-10 10:17:16.301 2 DEBUG nova.virt.libvirt.guest [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] detach device xml: <interface type="ethernet">
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <mac address="fa:16:3e:ae:b1:45"/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <model type="virtio"/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <driver name="vhost" rx_queue_size="512"/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <mtu size="1442"/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <target dev="tapa6efe4ab-2a"/>
Oct 10 10:17:16 compute-1 nova_compute[235132]: </interface>
Oct 10 10:17:16 compute-1 nova_compute[235132]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 10 10:17:16 compute-1 kernel: tapa6efe4ab-2a (unregistering): left promiscuous mode
Oct 10 10:17:16 compute-1 NetworkManager[44982]: <info>  [1760091436.4164] device (tapa6efe4ab-2a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 10 10:17:16 compute-1 nova_compute[235132]: 2025-10-10 10:17:16.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:16 compute-1 ovn_controller[131749]: 2025-10-10T10:17:16Z|00080|binding|INFO|Releasing lport a6efe4ab-2a26-46aa-8bf2-3dda99ea238c from this chassis (sb_readonly=0)
Oct 10 10:17:16 compute-1 ovn_controller[131749]: 2025-10-10T10:17:16Z|00081|binding|INFO|Setting lport a6efe4ab-2a26-46aa-8bf2-3dda99ea238c down in Southbound
Oct 10 10:17:16 compute-1 ovn_controller[131749]: 2025-10-10T10:17:16Z|00082|binding|INFO|Removing iface tapa6efe4ab-2a ovn-installed in OVS
Oct 10 10:17:16 compute-1 nova_compute[235132]: 2025-10-10 10:17:16.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:16 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:17:16.442 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ae:b1:45 10.100.0.29', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.29/28', 'neutron:device_id': 'bd82d620-e0e5-4fb1-b8a5-973cefbcd107', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-87f6394d-4290-4eca-8ba0-18711f3ad6e0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=daddf600-eff8-433f-97e5-f9a5bf5367ce, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>], logical_port=a6efe4ab-2a26-46aa-8bf2-3dda99ea238c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:17:16 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:17:16.443 141156 INFO neutron.agent.ovn.metadata.agent [-] Port a6efe4ab-2a26-46aa-8bf2-3dda99ea238c in datapath 87f6394d-4290-4eca-8ba0-18711f3ad6e0 unbound from our chassis
Oct 10 10:17:16 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:17:16.445 141156 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 87f6394d-4290-4eca-8ba0-18711f3ad6e0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 10 10:17:16 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:17:16.446 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[0d472758-0710-4bbf-a1ea-5948124b9529]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:17:16 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:17:16.446 141156 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0 namespace which is not needed anymore
Oct 10 10:17:16 compute-1 nova_compute[235132]: 2025-10-10 10:17:16.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:16 compute-1 nova_compute[235132]: 2025-10-10 10:17:16.472 2 DEBUG nova.virt.libvirt.driver [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Received event <DeviceRemovedEvent: 1760091436.4693735, bd82d620-e0e5-4fb1-b8a5-973cefbcd107 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 10 10:17:16 compute-1 nova_compute[235132]: 2025-10-10 10:17:16.472 2 DEBUG nova.virt.libvirt.driver [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Start waiting for the detach event from libvirt for device tapa6efe4ab-2a with device alias net1 for instance bd82d620-e0e5-4fb1-b8a5-973cefbcd107 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 10 10:17:16 compute-1 nova_compute[235132]: 2025-10-10 10:17:16.473 2 DEBUG nova.virt.libvirt.guest [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ae:b1:45"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa6efe4ab-2a"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 10 10:17:16 compute-1 nova_compute[235132]: 2025-10-10 10:17:16.482 2 DEBUG nova.virt.libvirt.guest [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:ae:b1:45"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa6efe4ab-2a"/></interface>not found in domain: <domain type='kvm' id='4'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <name>instance-00000006</name>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <uuid>bd82d620-e0e5-4fb1-b8a5-973cefbcd107</uuid>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <metadata>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <nova:name>tempest-TestNetworkBasicOps-server-217348562</nova:name>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <nova:creationTime>2025-10-10 10:15:55</nova:creationTime>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <nova:flavor name="m1.nano">
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <nova:memory>128</nova:memory>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <nova:disk>1</nova:disk>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <nova:swap>0</nova:swap>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <nova:ephemeral>0</nova:ephemeral>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <nova:vcpus>1</nova:vcpus>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   </nova:flavor>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <nova:owner>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <nova:user uuid="7956778c03764aaf8906c9b435337976">tempest-TestNetworkBasicOps-188749107-project-member</nova:user>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <nova:project uuid="d5e531d4b440422d946eaf6fd4e166f7">tempest-TestNetworkBasicOps-188749107</nova:project>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   </nova:owner>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <nova:root type="image" uuid="5ae78700-970d-45b4-a57d-978a054c7519"/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <nova:ports>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <nova:port uuid="562e8418-d47e-4fd1-8a23-094e0ce40097">
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </nova:port>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <nova:port uuid="a6efe4ab-2a26-46aa-8bf2-3dda99ea238c">
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <nova:ip type="fixed" address="10.100.0.29" ipVersion="4"/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </nova:port>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   </nova:ports>
Oct 10 10:17:16 compute-1 nova_compute[235132]: </nova:instance>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   </metadata>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <memory unit='KiB'>131072</memory>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <vcpu placement='static'>1</vcpu>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <resource>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <partition>/machine</partition>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   </resource>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <sysinfo type='smbios'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <system>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <entry name='manufacturer'>RDO</entry>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <entry name='product'>OpenStack Compute</entry>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <entry name='serial'>bd82d620-e0e5-4fb1-b8a5-973cefbcd107</entry>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <entry name='uuid'>bd82d620-e0e5-4fb1-b8a5-973cefbcd107</entry>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <entry name='family'>Virtual Machine</entry>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </system>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   </sysinfo>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <os>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <boot dev='hd'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <smbios mode='sysinfo'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   </os>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <features>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <acpi/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <apic/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <vmcoreinfo state='on'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   </features>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <cpu mode='custom' match='exact' check='full'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <model fallback='forbid'>EPYC-Rome</model>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <vendor>AMD</vendor>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='require' name='x2apic'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='require' name='tsc-deadline'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='require' name='hypervisor'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='require' name='tsc_adjust'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='require' name='spec-ctrl'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='require' name='stibp'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='require' name='arch-capabilities'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='require' name='ssbd'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='require' name='cmp_legacy'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='require' name='overflow-recov'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='require' name='succor'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='require' name='ibrs'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='require' name='amd-ssbd'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='require' name='virt-ssbd'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='disable' name='lbrv'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='disable' name='tsc-scale'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='disable' name='vmcb-clean'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='disable' name='flushbyasid'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='disable' name='pause-filter'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='disable' name='pfthreshold'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='require' name='rdctl-no'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='require' name='mds-no'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='require' name='pschange-mc-no'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='require' name='gds-no'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='require' name='rfds-no'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='disable' name='xsaves'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='disable' name='svm'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='require' name='topoext'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='disable' name='npt'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <feature policy='disable' name='nrip-save'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   </cpu>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <clock offset='utc'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <timer name='pit' tickpolicy='delay'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <timer name='hpet' present='no'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   </clock>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <on_poweroff>destroy</on_poweroff>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <on_reboot>restart</on_reboot>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <on_crash>destroy</on_crash>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <devices>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <disk type='network' device='disk'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <driver name='qemu' type='raw' cache='none'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <auth username='openstack'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:         <secret type='ceph' uuid='21f084a3-af34-5230-afe4-ea5cd24a55f4'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       </auth>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <source protocol='rbd' name='vms/bd82d620-e0e5-4fb1-b8a5-973cefbcd107_disk' index='2'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:         <host name='192.168.122.100' port='6789'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:         <host name='192.168.122.102' port='6789'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:         <host name='192.168.122.101' port='6789'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       </source>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target dev='vda' bus='virtio'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='virtio-disk0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </disk>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <disk type='network' device='cdrom'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <driver name='qemu' type='raw' cache='none'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <auth username='openstack'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:         <secret type='ceph' uuid='21f084a3-af34-5230-afe4-ea5cd24a55f4'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       </auth>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <source protocol='rbd' name='vms/bd82d620-e0e5-4fb1-b8a5-973cefbcd107_disk.config' index='1'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:         <host name='192.168.122.100' port='6789'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:         <host name='192.168.122.102' port='6789'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:         <host name='192.168.122.101' port='6789'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       </source>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target dev='sda' bus='sata'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <readonly/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='sata0-0-0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </disk>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='0' model='pcie-root'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pcie.0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='1' port='0x10'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.1'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='2' port='0x11'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.2'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='3' port='0x12'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.3'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='4' port='0x13'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.4'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='5' port='0x14'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.5'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='6' port='0x15'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.6'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='7' port='0x16'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.7'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='8' port='0x17'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.8'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='9' port='0x18'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.9'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='10' port='0x19'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.10'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='11' port='0x1a'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.11'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='12' port='0x1b'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.12'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='13' port='0x1c'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.13'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='14' port='0x1d'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.14'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='15' port='0x1e'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.15'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='16' port='0x1f'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.16'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='17' port='0x20'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.17'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='18' port='0x21'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.18'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='19' port='0x22'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.19'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='20' port='0x23'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.20'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='21' port='0x24'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.21'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='22' port='0x25'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.22'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='23' port='0x26'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.23'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='24' port='0x27'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.24'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-root-port'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target chassis='25' port='0x28'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.25'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model name='pcie-pci-bridge'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='pci.26'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='usb'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <controller type='sata' index='0'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='ide'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </controller>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <interface type='ethernet'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <mac address='fa:16:3e:73:fc:1f'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target dev='tap562e8418-d4'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model type='virtio'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <driver name='vhost' rx_queue_size='512'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <mtu size='1442'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='net0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </interface>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <serial type='pty'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <source path='/dev/pts/0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <log file='/var/lib/nova/instances/bd82d620-e0e5-4fb1-b8a5-973cefbcd107/console.log' append='off'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target type='isa-serial' port='0'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:         <model name='isa-serial'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       </target>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='serial0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </serial>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <console type='pty' tty='/dev/pts/0'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <source path='/dev/pts/0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <log file='/var/lib/nova/instances/bd82d620-e0e5-4fb1-b8a5-973cefbcd107/console.log' append='off'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <target type='serial' port='0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='serial0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </console>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <input type='tablet' bus='usb'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='input0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='usb' bus='0' port='1'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </input>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <input type='mouse' bus='ps2'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='input1'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </input>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <input type='keyboard' bus='ps2'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='input2'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </input>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <listen type='address' address='::0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </graphics>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <audio id='1' type='none'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <video>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <model type='virtio' heads='1' primary='yes'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='video0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </video>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <watchdog model='itco' action='reset'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='watchdog0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </watchdog>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <memballoon model='virtio'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <stats period='10'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='balloon0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </memballoon>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <rng model='virtio'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <backend model='random'>/dev/urandom</backend>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <alias name='rng0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </rng>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   </devices>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <label>system_u:system_r:svirt_t:s0:c141,c952</label>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c141,c952</imagelabel>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   </seclabel>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <label>+107:+107</label>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <imagelabel>+107:+107</imagelabel>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   </seclabel>
Oct 10 10:17:16 compute-1 nova_compute[235132]: </domain>
Oct 10 10:17:16 compute-1 nova_compute[235132]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 10 10:17:16 compute-1 nova_compute[235132]: 2025-10-10 10:17:16.483 2 INFO nova.virt.libvirt.driver [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Successfully detached device tapa6efe4ab-2a from instance bd82d620-e0e5-4fb1-b8a5-973cefbcd107 from the live domain config.
Oct 10 10:17:16 compute-1 nova_compute[235132]: 2025-10-10 10:17:16.484 2 DEBUG nova.virt.libvirt.vif [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-10T10:15:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-217348562',display_name='tempest-TestNetworkBasicOps-server-217348562',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-217348562',id=6,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHLzdAHDdUURWXD0NwbnzkKciFKEQ2omFqKVpiZUQ/jQkwx0IlaJ48FUTUghTozEFkbgWKl3XHIfnAKs6ai2Am8DZErVGD6iO1tzsuGiO5n1KsYJdS5ZP3lMvRFTeABsRg==',key_name='tempest-TestNetworkBasicOps-1625432950',keypairs=<?>,launch_index=0,launched_at=2025-10-10T10:15:28Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-qt8amg0k',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-10T10:15:28Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=bd82d620-e0e5-4fb1-b8a5-973cefbcd107,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "address": "fa:16:3e:ae:b1:45", "network": {"id": "87f6394d-4290-4eca-8ba0-18711f3ad6e0", "bridge": "br-int", "label": "tempest-network-smoke--1629699660", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6efe4ab-2a", "ovs_interfaceid": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 10 10:17:16 compute-1 nova_compute[235132]: 2025-10-10 10:17:16.484 2 DEBUG nova.network.os_vif_util [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "address": "fa:16:3e:ae:b1:45", "network": {"id": "87f6394d-4290-4eca-8ba0-18711f3ad6e0", "bridge": "br-int", "label": "tempest-network-smoke--1629699660", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6efe4ab-2a", "ovs_interfaceid": "a6efe4ab-2a26-46aa-8bf2-3dda99ea238c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 10 10:17:16 compute-1 nova_compute[235132]: 2025-10-10 10:17:16.485 2 DEBUG nova.network.os_vif_util [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ae:b1:45,bridge_name='br-int',has_traffic_filtering=True,id=a6efe4ab-2a26-46aa-8bf2-3dda99ea238c,network=Network(87f6394d-4290-4eca-8ba0-18711f3ad6e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6efe4ab-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 10 10:17:16 compute-1 nova_compute[235132]: 2025-10-10 10:17:16.486 2 DEBUG os_vif [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ae:b1:45,bridge_name='br-int',has_traffic_filtering=True,id=a6efe4ab-2a26-46aa-8bf2-3dda99ea238c,network=Network(87f6394d-4290-4eca-8ba0-18711f3ad6e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6efe4ab-2a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 10 10:17:16 compute-1 nova_compute[235132]: 2025-10-10 10:17:16.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:16 compute-1 nova_compute[235132]: 2025-10-10 10:17:16.490 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa6efe4ab-2a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:17:16 compute-1 nova_compute[235132]: 2025-10-10 10:17:16.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:16 compute-1 nova_compute[235132]: 2025-10-10 10:17:16.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:16 compute-1 nova_compute[235132]: 2025-10-10 10:17:16.497 2 INFO os_vif [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ae:b1:45,bridge_name='br-int',has_traffic_filtering=True,id=a6efe4ab-2a26-46aa-8bf2-3dda99ea238c,network=Network(87f6394d-4290-4eca-8ba0-18711f3ad6e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6efe4ab-2a')
Oct 10 10:17:16 compute-1 nova_compute[235132]: 2025-10-10 10:17:16.497 2 DEBUG nova.virt.libvirt.guest [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <nova:name>tempest-TestNetworkBasicOps-server-217348562</nova:name>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <nova:creationTime>2025-10-10 10:17:16</nova:creationTime>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <nova:flavor name="m1.nano">
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <nova:memory>128</nova:memory>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <nova:disk>1</nova:disk>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <nova:swap>0</nova:swap>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <nova:ephemeral>0</nova:ephemeral>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <nova:vcpus>1</nova:vcpus>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   </nova:flavor>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <nova:owner>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <nova:user uuid="7956778c03764aaf8906c9b435337976">tempest-TestNetworkBasicOps-188749107-project-member</nova:user>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <nova:project uuid="d5e531d4b440422d946eaf6fd4e166f7">tempest-TestNetworkBasicOps-188749107</nova:project>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   </nova:owner>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <nova:root type="image" uuid="5ae78700-970d-45b4-a57d-978a054c7519"/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   <nova:ports>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     <nova:port uuid="562e8418-d47e-4fd1-8a23-094e0ce40097">
Oct 10 10:17:16 compute-1 nova_compute[235132]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 10 10:17:16 compute-1 nova_compute[235132]:     </nova:port>
Oct 10 10:17:16 compute-1 nova_compute[235132]:   </nova:ports>
Oct 10 10:17:16 compute-1 nova_compute[235132]: </nova:instance>
Oct 10 10:17:16 compute-1 nova_compute[235132]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 10 10:17:16 compute-1 podman[245483]: 2025-10-10 10:17:16.564501355 +0000 UTC m=+0.106834509 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 10 10:17:16 compute-1 neutron-haproxy-ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0[244900]: [NOTICE]   (244904) : haproxy version is 2.8.14-c23fe91
Oct 10 10:17:16 compute-1 neutron-haproxy-ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0[244900]: [NOTICE]   (244904) : path to executable is /usr/sbin/haproxy
Oct 10 10:17:16 compute-1 neutron-haproxy-ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0[244900]: [WARNING]  (244904) : Exiting Master process...
Oct 10 10:17:16 compute-1 neutron-haproxy-ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0[244900]: [ALERT]    (244904) : Current worker (244906) exited with code 143 (Terminated)
Oct 10 10:17:16 compute-1 neutron-haproxy-ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0[244900]: [WARNING]  (244904) : All workers exited. Exiting... (0)
Oct 10 10:17:16 compute-1 systemd[1]: libpod-24470c868dec800b6afc9c007cf2c5daf050700fd0047b50978eb69f4e5cb050.scope: Deactivated successfully.
Oct 10 10:17:16 compute-1 podman[245523]: 2025-10-10 10:17:16.623113932 +0000 UTC m=+0.054495885 container died 24470c868dec800b6afc9c007cf2c5daf050700fd0047b50978eb69f4e5cb050 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 10:17:16 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-24470c868dec800b6afc9c007cf2c5daf050700fd0047b50978eb69f4e5cb050-userdata-shm.mount: Deactivated successfully.
Oct 10 10:17:16 compute-1 systemd[1]: var-lib-containers-storage-overlay-e0e01352dbf0d1bebdf46980d76e9074c40ab7243819392e4c65167a834fa151-merged.mount: Deactivated successfully.
Oct 10 10:17:16 compute-1 podman[245523]: 2025-10-10 10:17:16.668789504 +0000 UTC m=+0.100171447 container cleanup 24470c868dec800b6afc9c007cf2c5daf050700fd0047b50978eb69f4e5cb050 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 10 10:17:16 compute-1 systemd[1]: libpod-conmon-24470c868dec800b6afc9c007cf2c5daf050700fd0047b50978eb69f4e5cb050.scope: Deactivated successfully.
Oct 10 10:17:16 compute-1 podman[245554]: 2025-10-10 10:17:16.749749964 +0000 UTC m=+0.050596358 container remove 24470c868dec800b6afc9c007cf2c5daf050700fd0047b50978eb69f4e5cb050 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 10 10:17:16 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:17:16.759 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[52e347b4-fac0-4510-ab7e-ffb24ca5b0e0]: (4, ('Fri Oct 10 10:17:16 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0 (24470c868dec800b6afc9c007cf2c5daf050700fd0047b50978eb69f4e5cb050)\n24470c868dec800b6afc9c007cf2c5daf050700fd0047b50978eb69f4e5cb050\nFri Oct 10 10:17:16 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0 (24470c868dec800b6afc9c007cf2c5daf050700fd0047b50978eb69f4e5cb050)\n24470c868dec800b6afc9c007cf2c5daf050700fd0047b50978eb69f4e5cb050\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:17:16 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:17:16.761 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[abef3c15-6ac0-43a2-8957-f1dacda13853]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:17:16 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:17:16.762 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87f6394d-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:17:16 compute-1 kernel: tap87f6394d-40: left promiscuous mode
Oct 10 10:17:16 compute-1 nova_compute[235132]: 2025-10-10 10:17:16.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:16 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:16 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:16 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:16.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:16 compute-1 nova_compute[235132]: 2025-10-10 10:17:16.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:16 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:17:16.792 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[79221226-4bc7-4cf9-b9f5-126616bb35f9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:17:16 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:17:16.833 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[16e40510-e8e4-43fa-b8be-63729acc68eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:17:16 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:17:16.835 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[99210dc1-9962-4362-9888-3fc26e84ae9d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:17:16 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:17:16.859 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[a334b838-be0c-4206-9f14-7e462e3a6a68]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 426943, 'reachable_time': 41874, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245569, 'error': None, 'target': 'ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:17:16 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:17:16.863 141275 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-87f6394d-4290-4eca-8ba0-18711f3ad6e0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 10 10:17:16 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:17:16.863 141275 DEBUG oslo.privsep.daemon [-] privsep: reply[140656c6-7a47-476b-9322-200093305d07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:17:16 compute-1 systemd[1]: run-netns-ovnmeta\x2d87f6394d\x2d4290\x2d4eca\x2d8ba0\x2d18711f3ad6e0.mount: Deactivated successfully.
Oct 10 10:17:17 compute-1 nova_compute[235132]: 2025-10-10 10:17:17.020 2 DEBUG nova.compute.manager [req-9ace626d-e45e-4682-984d-e4180403f77a req-e8c53912-4bca-4c31-b6be-5f29ddd19971 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Received event network-vif-unplugged-a6efe4ab-2a26-46aa-8bf2-3dda99ea238c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:17:17 compute-1 nova_compute[235132]: 2025-10-10 10:17:17.020 2 DEBUG oslo_concurrency.lockutils [req-9ace626d-e45e-4682-984d-e4180403f77a req-e8c53912-4bca-4c31-b6be-5f29ddd19971 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:17:17 compute-1 nova_compute[235132]: 2025-10-10 10:17:17.021 2 DEBUG oslo_concurrency.lockutils [req-9ace626d-e45e-4682-984d-e4180403f77a req-e8c53912-4bca-4c31-b6be-5f29ddd19971 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:17:17 compute-1 nova_compute[235132]: 2025-10-10 10:17:17.021 2 DEBUG oslo_concurrency.lockutils [req-9ace626d-e45e-4682-984d-e4180403f77a req-e8c53912-4bca-4c31-b6be-5f29ddd19971 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:17:17 compute-1 nova_compute[235132]: 2025-10-10 10:17:17.022 2 DEBUG nova.compute.manager [req-9ace626d-e45e-4682-984d-e4180403f77a req-e8c53912-4bca-4c31-b6be-5f29ddd19971 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] No waiting events found dispatching network-vif-unplugged-a6efe4ab-2a26-46aa-8bf2-3dda99ea238c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 10:17:17 compute-1 nova_compute[235132]: 2025-10-10 10:17:17.022 2 WARNING nova.compute.manager [req-9ace626d-e45e-4682-984d-e4180403f77a req-e8c53912-4bca-4c31-b6be-5f29ddd19971 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Received unexpected event network-vif-unplugged-a6efe4ab-2a26-46aa-8bf2-3dda99ea238c for instance with vm_state active and task_state None.
Oct 10 10:17:17 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:17:17 compute-1 nova_compute[235132]: 2025-10-10 10:17:17.331 2 DEBUG oslo_concurrency.lockutils [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:17:17 compute-1 nova_compute[235132]: 2025-10-10 10:17:17.331 2 DEBUG oslo_concurrency.lockutils [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquired lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:17:17 compute-1 nova_compute[235132]: 2025-10-10 10:17:17.331 2 DEBUG nova.network.neutron [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 10 10:17:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:17:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:17:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:17:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:17:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:17:18 compute-1 ovn_controller[131749]: 2025-10-10T10:17:18Z|00083|binding|INFO|Releasing lport 318e6d8e-f58f-407d-854f-d27adc402b34 from this chassis (sb_readonly=0)
Oct 10 10:17:18 compute-1 nova_compute[235132]: 2025-10-10 10:17:18.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:18 compute-1 nova_compute[235132]: 2025-10-10 10:17:18.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:18 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:17:18.208 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ee0899c1-415d-4aa8-abe8-1240b4e8bf2c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:17:18 compute-1 ceph-mon[79167]: pgmap v940: 353 pgs: 353 active+clean; 121 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 126 KiB/s rd, 4.2 KiB/s wr, 206 op/s
Oct 10 10:17:18 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:18 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:18 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:18.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:18 compute-1 nova_compute[235132]: 2025-10-10 10:17:18.491 2 INFO nova.network.neutron [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Port a6efe4ab-2a26-46aa-8bf2-3dda99ea238c from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Oct 10 10:17:18 compute-1 nova_compute[235132]: 2025-10-10 10:17:18.492 2 DEBUG nova.network.neutron [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Updating instance_info_cache with network_info: [{"id": "562e8418-d47e-4fd1-8a23-094e0ce40097", "address": "fa:16:3e:73:fc:1f", "network": {"id": "ebfb122d-a6ca-4257-952a-e1a888448e1c", "bridge": "br-int", "label": "tempest-network-smoke--1217728793", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap562e8418-d4", "ovs_interfaceid": "562e8418-d47e-4fd1-8a23-094e0ce40097", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:17:18 compute-1 nova_compute[235132]: 2025-10-10 10:17:18.515 2 DEBUG oslo_concurrency.lockutils [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Releasing lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:17:18 compute-1 nova_compute[235132]: 2025-10-10 10:17:18.543 2 DEBUG oslo_concurrency.lockutils [None req-360ff5e0-98e0-4f32-ae04-16e530e16a8a 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "interface-bd82d620-e0e5-4fb1-b8a5-973cefbcd107-a6efe4ab-2a26-46aa-8bf2-3dda99ea238c" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 2.317s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:17:18 compute-1 nova_compute[235132]: 2025-10-10 10:17:18.728 2 DEBUG nova.compute.manager [req-2f02eaf0-ee31-4ee2-ac15-69f3779ba54d req-836d3e3a-be5e-4e45-8d66-79a012d23c45 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Received event network-changed-562e8418-d47e-4fd1-8a23-094e0ce40097 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:17:18 compute-1 nova_compute[235132]: 2025-10-10 10:17:18.728 2 DEBUG nova.compute.manager [req-2f02eaf0-ee31-4ee2-ac15-69f3779ba54d req-836d3e3a-be5e-4e45-8d66-79a012d23c45 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Refreshing instance network info cache due to event network-changed-562e8418-d47e-4fd1-8a23-094e0ce40097. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 10 10:17:18 compute-1 nova_compute[235132]: 2025-10-10 10:17:18.728 2 DEBUG oslo_concurrency.lockutils [req-2f02eaf0-ee31-4ee2-ac15-69f3779ba54d req-836d3e3a-be5e-4e45-8d66-79a012d23c45 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:17:18 compute-1 nova_compute[235132]: 2025-10-10 10:17:18.728 2 DEBUG oslo_concurrency.lockutils [req-2f02eaf0-ee31-4ee2-ac15-69f3779ba54d req-836d3e3a-be5e-4e45-8d66-79a012d23c45 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquired lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:17:18 compute-1 nova_compute[235132]: 2025-10-10 10:17:18.728 2 DEBUG nova.network.neutron [req-2f02eaf0-ee31-4ee2-ac15-69f3779ba54d req-836d3e3a-be5e-4e45-8d66-79a012d23c45 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Refreshing network info cache for port 562e8418-d47e-4fd1-8a23-094e0ce40097 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 10 10:17:18 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:18 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:17:18 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:18.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:17:18 compute-1 nova_compute[235132]: 2025-10-10 10:17:18.824 2 DEBUG oslo_concurrency.lockutils [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:17:18 compute-1 nova_compute[235132]: 2025-10-10 10:17:18.824 2 DEBUG oslo_concurrency.lockutils [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:17:18 compute-1 nova_compute[235132]: 2025-10-10 10:17:18.825 2 DEBUG oslo_concurrency.lockutils [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:17:18 compute-1 nova_compute[235132]: 2025-10-10 10:17:18.825 2 DEBUG oslo_concurrency.lockutils [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:17:18 compute-1 nova_compute[235132]: 2025-10-10 10:17:18.825 2 DEBUG oslo_concurrency.lockutils [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:17:18 compute-1 nova_compute[235132]: 2025-10-10 10:17:18.827 2 INFO nova.compute.manager [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Terminating instance
Oct 10 10:17:18 compute-1 nova_compute[235132]: 2025-10-10 10:17:18.828 2 DEBUG nova.compute.manager [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 10 10:17:18 compute-1 kernel: tap562e8418-d4 (unregistering): left promiscuous mode
Oct 10 10:17:18 compute-1 NetworkManager[44982]: <info>  [1760091438.8862] device (tap562e8418-d4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 10 10:17:18 compute-1 ovn_controller[131749]: 2025-10-10T10:17:18Z|00084|binding|INFO|Releasing lport 562e8418-d47e-4fd1-8a23-094e0ce40097 from this chassis (sb_readonly=0)
Oct 10 10:17:18 compute-1 ovn_controller[131749]: 2025-10-10T10:17:18Z|00085|binding|INFO|Setting lport 562e8418-d47e-4fd1-8a23-094e0ce40097 down in Southbound
Oct 10 10:17:18 compute-1 nova_compute[235132]: 2025-10-10 10:17:18.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:18 compute-1 ovn_controller[131749]: 2025-10-10T10:17:18Z|00086|binding|INFO|Removing iface tap562e8418-d4 ovn-installed in OVS
Oct 10 10:17:18 compute-1 nova_compute[235132]: 2025-10-10 10:17:18.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:18 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:17:18.910 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:73:fc:1f 10.100.0.12'], port_security=['fa:16:3e:73:fc:1f 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'bd82d620-e0e5-4fb1-b8a5-973cefbcd107', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ebfb122d-a6ca-4257-952a-e1a888448e1c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5f14a6f9-41f9-49f8-b407-62ca2cdc0259', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46d717de-5083-46ba-b06e-f3ccc6cb202a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>], logical_port=562e8418-d47e-4fd1-8a23-094e0ce40097) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:17:18 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:17:18.913 141156 INFO neutron.agent.ovn.metadata.agent [-] Port 562e8418-d47e-4fd1-8a23-094e0ce40097 in datapath ebfb122d-a6ca-4257-952a-e1a888448e1c unbound from our chassis
Oct 10 10:17:18 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:17:18.914 141156 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ebfb122d-a6ca-4257-952a-e1a888448e1c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 10 10:17:18 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:17:18.915 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[3750a1de-3d1a-4491-a366-23a5b11d0170]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:17:18 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:17:18.916 141156 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ebfb122d-a6ca-4257-952a-e1a888448e1c namespace which is not needed anymore
Oct 10 10:17:18 compute-1 nova_compute[235132]: 2025-10-10 10:17:18.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:18 compute-1 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000006.scope: Deactivated successfully.
Oct 10 10:17:18 compute-1 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000006.scope: Consumed 20.122s CPU time.
Oct 10 10:17:18 compute-1 systemd-machined[191637]: Machine qemu-4-instance-00000006 terminated.
Oct 10 10:17:19 compute-1 NetworkManager[44982]: <info>  [1760091439.0489] manager: (tap562e8418-d4): new Tun device (/org/freedesktop/NetworkManager/Devices/59)
Oct 10 10:17:19 compute-1 nova_compute[235132]: 2025-10-10 10:17:19.074 2 INFO nova.virt.libvirt.driver [-] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Instance destroyed successfully.
Oct 10 10:17:19 compute-1 nova_compute[235132]: 2025-10-10 10:17:19.075 2 DEBUG nova.objects.instance [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'resources' on Instance uuid bd82d620-e0e5-4fb1-b8a5-973cefbcd107 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 10:17:19 compute-1 neutron-haproxy-ovnmeta-ebfb122d-a6ca-4257-952a-e1a888448e1c[243159]: [NOTICE]   (243163) : haproxy version is 2.8.14-c23fe91
Oct 10 10:17:19 compute-1 neutron-haproxy-ovnmeta-ebfb122d-a6ca-4257-952a-e1a888448e1c[243159]: [NOTICE]   (243163) : path to executable is /usr/sbin/haproxy
Oct 10 10:17:19 compute-1 neutron-haproxy-ovnmeta-ebfb122d-a6ca-4257-952a-e1a888448e1c[243159]: [WARNING]  (243163) : Exiting Master process...
Oct 10 10:17:19 compute-1 neutron-haproxy-ovnmeta-ebfb122d-a6ca-4257-952a-e1a888448e1c[243159]: [WARNING]  (243163) : Exiting Master process...
Oct 10 10:17:19 compute-1 neutron-haproxy-ovnmeta-ebfb122d-a6ca-4257-952a-e1a888448e1c[243159]: [ALERT]    (243163) : Current worker (243165) exited with code 143 (Terminated)
Oct 10 10:17:19 compute-1 neutron-haproxy-ovnmeta-ebfb122d-a6ca-4257-952a-e1a888448e1c[243159]: [WARNING]  (243163) : All workers exited. Exiting... (0)
Oct 10 10:17:19 compute-1 systemd[1]: libpod-35ed349cd81e0090d1cc262ca291722ba38bb459c9f6b6d929daf427a5465c69.scope: Deactivated successfully.
Oct 10 10:17:19 compute-1 podman[245593]: 2025-10-10 10:17:19.09695412 +0000 UTC m=+0.055528134 container died 35ed349cd81e0090d1cc262ca291722ba38bb459c9f6b6d929daf427a5465c69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ebfb122d-a6ca-4257-952a-e1a888448e1c, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 10 10:17:19 compute-1 nova_compute[235132]: 2025-10-10 10:17:19.103 2 DEBUG nova.virt.libvirt.vif [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-10T10:15:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-217348562',display_name='tempest-TestNetworkBasicOps-server-217348562',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-217348562',id=6,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHLzdAHDdUURWXD0NwbnzkKciFKEQ2omFqKVpiZUQ/jQkwx0IlaJ48FUTUghTozEFkbgWKl3XHIfnAKs6ai2Am8DZErVGD6iO1tzsuGiO5n1KsYJdS5ZP3lMvRFTeABsRg==',key_name='tempest-TestNetworkBasicOps-1625432950',keypairs=<?>,launch_index=0,launched_at=2025-10-10T10:15:28Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-qt8amg0k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-10T10:15:28Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=bd82d620-e0e5-4fb1-b8a5-973cefbcd107,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "562e8418-d47e-4fd1-8a23-094e0ce40097", "address": "fa:16:3e:73:fc:1f", "network": {"id": "ebfb122d-a6ca-4257-952a-e1a888448e1c", "bridge": "br-int", "label": "tempest-network-smoke--1217728793", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap562e8418-d4", "ovs_interfaceid": "562e8418-d47e-4fd1-8a23-094e0ce40097", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 10 10:17:19 compute-1 nova_compute[235132]: 2025-10-10 10:17:19.103 2 DEBUG nova.network.os_vif_util [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "562e8418-d47e-4fd1-8a23-094e0ce40097", "address": "fa:16:3e:73:fc:1f", "network": {"id": "ebfb122d-a6ca-4257-952a-e1a888448e1c", "bridge": "br-int", "label": "tempest-network-smoke--1217728793", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap562e8418-d4", "ovs_interfaceid": "562e8418-d47e-4fd1-8a23-094e0ce40097", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 10 10:17:19 compute-1 nova_compute[235132]: 2025-10-10 10:17:19.104 2 DEBUG nova.network.os_vif_util [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:73:fc:1f,bridge_name='br-int',has_traffic_filtering=True,id=562e8418-d47e-4fd1-8a23-094e0ce40097,network=Network(ebfb122d-a6ca-4257-952a-e1a888448e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap562e8418-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 10 10:17:19 compute-1 nova_compute[235132]: 2025-10-10 10:17:19.105 2 DEBUG os_vif [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:73:fc:1f,bridge_name='br-int',has_traffic_filtering=True,id=562e8418-d47e-4fd1-8a23-094e0ce40097,network=Network(ebfb122d-a6ca-4257-952a-e1a888448e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap562e8418-d4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 10 10:17:19 compute-1 nova_compute[235132]: 2025-10-10 10:17:19.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:19 compute-1 nova_compute[235132]: 2025-10-10 10:17:19.107 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap562e8418-d4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:17:19 compute-1 nova_compute[235132]: 2025-10-10 10:17:19.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:19 compute-1 nova_compute[235132]: 2025-10-10 10:17:19.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:19 compute-1 nova_compute[235132]: 2025-10-10 10:17:19.112 2 INFO os_vif [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:73:fc:1f,bridge_name='br-int',has_traffic_filtering=True,id=562e8418-d47e-4fd1-8a23-094e0ce40097,network=Network(ebfb122d-a6ca-4257-952a-e1a888448e1c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap562e8418-d4')
Oct 10 10:17:19 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-35ed349cd81e0090d1cc262ca291722ba38bb459c9f6b6d929daf427a5465c69-userdata-shm.mount: Deactivated successfully.
Oct 10 10:17:19 compute-1 systemd[1]: var-lib-containers-storage-overlay-3b1835549180fe222ffd4c8fc7255dc61386526a292af096d5df92e7189879c8-merged.mount: Deactivated successfully.
Oct 10 10:17:19 compute-1 podman[245593]: 2025-10-10 10:17:19.138059777 +0000 UTC m=+0.096633771 container cleanup 35ed349cd81e0090d1cc262ca291722ba38bb459c9f6b6d929daf427a5465c69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ebfb122d-a6ca-4257-952a-e1a888448e1c, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 10 10:17:19 compute-1 nova_compute[235132]: 2025-10-10 10:17:19.151 2 DEBUG nova.compute.manager [req-7695ecbc-7ab0-472f-82b3-4a260e6e9af7 req-f5364fca-4463-48d4-95b5-a036ce4e40be 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Received event network-vif-plugged-a6efe4ab-2a26-46aa-8bf2-3dda99ea238c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:17:19 compute-1 nova_compute[235132]: 2025-10-10 10:17:19.151 2 DEBUG oslo_concurrency.lockutils [req-7695ecbc-7ab0-472f-82b3-4a260e6e9af7 req-f5364fca-4463-48d4-95b5-a036ce4e40be 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:17:19 compute-1 systemd[1]: libpod-conmon-35ed349cd81e0090d1cc262ca291722ba38bb459c9f6b6d929daf427a5465c69.scope: Deactivated successfully.
Oct 10 10:17:19 compute-1 nova_compute[235132]: 2025-10-10 10:17:19.152 2 DEBUG oslo_concurrency.lockutils [req-7695ecbc-7ab0-472f-82b3-4a260e6e9af7 req-f5364fca-4463-48d4-95b5-a036ce4e40be 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:17:19 compute-1 nova_compute[235132]: 2025-10-10 10:17:19.152 2 DEBUG oslo_concurrency.lockutils [req-7695ecbc-7ab0-472f-82b3-4a260e6e9af7 req-f5364fca-4463-48d4-95b5-a036ce4e40be 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:17:19 compute-1 nova_compute[235132]: 2025-10-10 10:17:19.153 2 DEBUG nova.compute.manager [req-7695ecbc-7ab0-472f-82b3-4a260e6e9af7 req-f5364fca-4463-48d4-95b5-a036ce4e40be 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] No waiting events found dispatching network-vif-plugged-a6efe4ab-2a26-46aa-8bf2-3dda99ea238c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 10:17:19 compute-1 nova_compute[235132]: 2025-10-10 10:17:19.153 2 WARNING nova.compute.manager [req-7695ecbc-7ab0-472f-82b3-4a260e6e9af7 req-f5364fca-4463-48d4-95b5-a036ce4e40be 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Received unexpected event network-vif-plugged-a6efe4ab-2a26-46aa-8bf2-3dda99ea238c for instance with vm_state active and task_state deleting.
Oct 10 10:17:19 compute-1 nova_compute[235132]: 2025-10-10 10:17:19.154 2 DEBUG nova.compute.manager [req-7695ecbc-7ab0-472f-82b3-4a260e6e9af7 req-f5364fca-4463-48d4-95b5-a036ce4e40be 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Received event network-vif-deleted-a6efe4ab-2a26-46aa-8bf2-3dda99ea238c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:17:19 compute-1 podman[245646]: 2025-10-10 10:17:19.208551149 +0000 UTC m=+0.045787787 container remove 35ed349cd81e0090d1cc262ca291722ba38bb459c9f6b6d929daf427a5465c69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ebfb122d-a6ca-4257-952a-e1a888448e1c, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct 10 10:17:19 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:17:19.214 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[0b068601-a48f-458f-aa6a-d8116001be85]: (4, ('Fri Oct 10 10:17:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ebfb122d-a6ca-4257-952a-e1a888448e1c (35ed349cd81e0090d1cc262ca291722ba38bb459c9f6b6d929daf427a5465c69)\n35ed349cd81e0090d1cc262ca291722ba38bb459c9f6b6d929daf427a5465c69\nFri Oct 10 10:17:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ebfb122d-a6ca-4257-952a-e1a888448e1c (35ed349cd81e0090d1cc262ca291722ba38bb459c9f6b6d929daf427a5465c69)\n35ed349cd81e0090d1cc262ca291722ba38bb459c9f6b6d929daf427a5465c69\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:17:19 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:17:19.216 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[66a47aa3-8980-4a22-9f93-7e2f65178c73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:17:19 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:17:19.218 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebfb122d-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:17:19 compute-1 nova_compute[235132]: 2025-10-10 10:17:19.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:19 compute-1 kernel: tapebfb122d-a0: left promiscuous mode
Oct 10 10:17:19 compute-1 nova_compute[235132]: 2025-10-10 10:17:19.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:19 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:17:19.227 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[04a0bc56-2fee-44f3-925f-024c253da9c2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:17:19 compute-1 nova_compute[235132]: 2025-10-10 10:17:19.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:19 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:17:19.255 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[37a521a4-d6c9-4159-baee-2d9296979ae4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:17:19 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:17:19.257 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[dab8f559-26ec-41f3-b010-ab6ba8cf4e70]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:17:19 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:17:19.278 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[17e73ef2-5fc0-4843-837f-fb22759d296f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 424115, 'reachable_time': 24479, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245664, 'error': None, 'target': 'ovnmeta-ebfb122d-a6ca-4257-952a-e1a888448e1c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:17:19 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:17:19.281 141275 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ebfb122d-a6ca-4257-952a-e1a888448e1c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 10 10:17:19 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:17:19.281 141275 DEBUG oslo.privsep.daemon [-] privsep: reply[666a0a7a-bd6d-464d-8381-4daa7e263b94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:17:19 compute-1 systemd[1]: run-netns-ovnmeta\x2debfb122d\x2da6ca\x2d4257\x2d952a\x2de1a888448e1c.mount: Deactivated successfully.
Oct 10 10:17:19 compute-1 sudo[245665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:17:19 compute-1 sudo[245665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:17:19 compute-1 sudo[245665]: pam_unix(sudo:session): session closed for user root
Oct 10 10:17:19 compute-1 nova_compute[235132]: 2025-10-10 10:17:19.619 2 INFO nova.virt.libvirt.driver [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Deleting instance files /var/lib/nova/instances/bd82d620-e0e5-4fb1-b8a5-973cefbcd107_del
Oct 10 10:17:19 compute-1 nova_compute[235132]: 2025-10-10 10:17:19.621 2 INFO nova.virt.libvirt.driver [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Deletion of /var/lib/nova/instances/bd82d620-e0e5-4fb1-b8a5-973cefbcd107_del complete
Oct 10 10:17:19 compute-1 nova_compute[235132]: 2025-10-10 10:17:19.689 2 INFO nova.compute.manager [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Took 0.86 seconds to destroy the instance on the hypervisor.
Oct 10 10:17:19 compute-1 nova_compute[235132]: 2025-10-10 10:17:19.690 2 DEBUG oslo.service.loopingcall [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 10 10:17:19 compute-1 nova_compute[235132]: 2025-10-10 10:17:19.690 2 DEBUG nova.compute.manager [-] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 10 10:17:19 compute-1 nova_compute[235132]: 2025-10-10 10:17:19.690 2 DEBUG nova.network.neutron [-] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 10 10:17:19 compute-1 nova_compute[235132]: 2025-10-10 10:17:19.944 2 DEBUG nova.network.neutron [req-2f02eaf0-ee31-4ee2-ac15-69f3779ba54d req-836d3e3a-be5e-4e45-8d66-79a012d23c45 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Updated VIF entry in instance network info cache for port 562e8418-d47e-4fd1-8a23-094e0ce40097. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 10 10:17:19 compute-1 nova_compute[235132]: 2025-10-10 10:17:19.945 2 DEBUG nova.network.neutron [req-2f02eaf0-ee31-4ee2-ac15-69f3779ba54d req-836d3e3a-be5e-4e45-8d66-79a012d23c45 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Updating instance_info_cache with network_info: [{"id": "562e8418-d47e-4fd1-8a23-094e0ce40097", "address": "fa:16:3e:73:fc:1f", "network": {"id": "ebfb122d-a6ca-4257-952a-e1a888448e1c", "bridge": "br-int", "label": "tempest-network-smoke--1217728793", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap562e8418-d4", "ovs_interfaceid": "562e8418-d47e-4fd1-8a23-094e0ce40097", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:17:19 compute-1 nova_compute[235132]: 2025-10-10 10:17:19.964 2 DEBUG oslo_concurrency.lockutils [req-2f02eaf0-ee31-4ee2-ac15-69f3779ba54d req-836d3e3a-be5e-4e45-8d66-79a012d23c45 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Releasing lock "refresh_cache-bd82d620-e0e5-4fb1-b8a5-973cefbcd107" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:17:20 compute-1 ceph-mon[79167]: pgmap v941: 353 pgs: 353 active+clean; 121 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 127 KiB/s rd, 5.2 KiB/s wr, 207 op/s
Oct 10 10:17:20 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:20 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:20 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:20.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:20 compute-1 nova_compute[235132]: 2025-10-10 10:17:20.526 2 DEBUG nova.network.neutron [-] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:17:20 compute-1 nova_compute[235132]: 2025-10-10 10:17:20.548 2 INFO nova.compute.manager [-] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Took 0.86 seconds to deallocate network for instance.
Oct 10 10:17:20 compute-1 nova_compute[235132]: 2025-10-10 10:17:20.601 2 DEBUG oslo_concurrency.lockutils [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:17:20 compute-1 nova_compute[235132]: 2025-10-10 10:17:20.602 2 DEBUG oslo_concurrency.lockutils [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:17:20 compute-1 nova_compute[235132]: 2025-10-10 10:17:20.694 2 DEBUG oslo_concurrency.processutils [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:17:20 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:20 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:20 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:20.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:20 compute-1 nova_compute[235132]: 2025-10-10 10:17:20.826 2 DEBUG nova.compute.manager [req-873ca703-7179-411d-91db-0d970ad932f4 req-1c67c0ab-9283-40c2-a19c-e449073f501b 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Received event network-vif-unplugged-562e8418-d47e-4fd1-8a23-094e0ce40097 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:17:20 compute-1 nova_compute[235132]: 2025-10-10 10:17:20.827 2 DEBUG oslo_concurrency.lockutils [req-873ca703-7179-411d-91db-0d970ad932f4 req-1c67c0ab-9283-40c2-a19c-e449073f501b 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:17:20 compute-1 nova_compute[235132]: 2025-10-10 10:17:20.828 2 DEBUG oslo_concurrency.lockutils [req-873ca703-7179-411d-91db-0d970ad932f4 req-1c67c0ab-9283-40c2-a19c-e449073f501b 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:17:20 compute-1 nova_compute[235132]: 2025-10-10 10:17:20.828 2 DEBUG oslo_concurrency.lockutils [req-873ca703-7179-411d-91db-0d970ad932f4 req-1c67c0ab-9283-40c2-a19c-e449073f501b 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:17:20 compute-1 nova_compute[235132]: 2025-10-10 10:17:20.829 2 DEBUG nova.compute.manager [req-873ca703-7179-411d-91db-0d970ad932f4 req-1c67c0ab-9283-40c2-a19c-e449073f501b 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] No waiting events found dispatching network-vif-unplugged-562e8418-d47e-4fd1-8a23-094e0ce40097 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 10:17:20 compute-1 nova_compute[235132]: 2025-10-10 10:17:20.830 2 WARNING nova.compute.manager [req-873ca703-7179-411d-91db-0d970ad932f4 req-1c67c0ab-9283-40c2-a19c-e449073f501b 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Received unexpected event network-vif-unplugged-562e8418-d47e-4fd1-8a23-094e0ce40097 for instance with vm_state deleted and task_state None.
Oct 10 10:17:20 compute-1 nova_compute[235132]: 2025-10-10 10:17:20.830 2 DEBUG nova.compute.manager [req-873ca703-7179-411d-91db-0d970ad932f4 req-1c67c0ab-9283-40c2-a19c-e449073f501b 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Received event network-vif-plugged-562e8418-d47e-4fd1-8a23-094e0ce40097 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:17:20 compute-1 nova_compute[235132]: 2025-10-10 10:17:20.831 2 DEBUG oslo_concurrency.lockutils [req-873ca703-7179-411d-91db-0d970ad932f4 req-1c67c0ab-9283-40c2-a19c-e449073f501b 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:17:20 compute-1 nova_compute[235132]: 2025-10-10 10:17:20.832 2 DEBUG oslo_concurrency.lockutils [req-873ca703-7179-411d-91db-0d970ad932f4 req-1c67c0ab-9283-40c2-a19c-e449073f501b 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:17:20 compute-1 nova_compute[235132]: 2025-10-10 10:17:20.832 2 DEBUG oslo_concurrency.lockutils [req-873ca703-7179-411d-91db-0d970ad932f4 req-1c67c0ab-9283-40c2-a19c-e449073f501b 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:17:20 compute-1 nova_compute[235132]: 2025-10-10 10:17:20.833 2 DEBUG nova.compute.manager [req-873ca703-7179-411d-91db-0d970ad932f4 req-1c67c0ab-9283-40c2-a19c-e449073f501b 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] No waiting events found dispatching network-vif-plugged-562e8418-d47e-4fd1-8a23-094e0ce40097 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 10:17:20 compute-1 nova_compute[235132]: 2025-10-10 10:17:20.833 2 WARNING nova.compute.manager [req-873ca703-7179-411d-91db-0d970ad932f4 req-1c67c0ab-9283-40c2-a19c-e449073f501b 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Received unexpected event network-vif-plugged-562e8418-d47e-4fd1-8a23-094e0ce40097 for instance with vm_state deleted and task_state None.
Oct 10 10:17:20 compute-1 nova_compute[235132]: 2025-10-10 10:17:20.834 2 DEBUG nova.compute.manager [req-873ca703-7179-411d-91db-0d970ad932f4 req-1c67c0ab-9283-40c2-a19c-e449073f501b 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Received event network-vif-deleted-562e8418-d47e-4fd1-8a23-094e0ce40097 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:17:21 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:17:21 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3931265671' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:17:21 compute-1 nova_compute[235132]: 2025-10-10 10:17:21.178 2 DEBUG oslo_concurrency.processutils [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:17:21 compute-1 nova_compute[235132]: 2025-10-10 10:17:21.185 2 DEBUG nova.compute.provider_tree [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:17:21 compute-1 nova_compute[235132]: 2025-10-10 10:17:21.211 2 DEBUG nova.scheduler.client.report [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:17:21 compute-1 nova_compute[235132]: 2025-10-10 10:17:21.246 2 DEBUG oslo_concurrency.lockutils [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:17:21 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3931265671' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:17:21 compute-1 nova_compute[235132]: 2025-10-10 10:17:21.281 2 INFO nova.scheduler.client.report [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Deleted allocations for instance bd82d620-e0e5-4fb1-b8a5-973cefbcd107
Oct 10 10:17:21 compute-1 nova_compute[235132]: 2025-10-10 10:17:21.377 2 DEBUG oslo_concurrency.lockutils [None req-2e3d9ca4-de58-465a-b664-3302ebf248d4 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "bd82d620-e0e5-4fb1-b8a5-973cefbcd107" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.553s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:17:22 compute-1 ceph-mon[79167]: pgmap v942: 353 pgs: 353 active+clean; 121 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.2 KiB/s wr, 29 op/s
Oct 10 10:17:22 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:22 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:17:22 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:22.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:17:22 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:17:22 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:22 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:22 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:22.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:17:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:17:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:17:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:17:23 compute-1 nova_compute[235132]: 2025-10-10 10:17:23.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:24 compute-1 nova_compute[235132]: 2025-10-10 10:17:24.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:24 compute-1 ceph-mon[79167]: pgmap v943: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 6.0 KiB/s wr, 57 op/s
Oct 10 10:17:24 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:24 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:24 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:24.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:24 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:24 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:17:24 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:24.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:17:26 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:26 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:26 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:26.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:26 compute-1 ceph-mon[79167]: pgmap v944: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.8 KiB/s wr, 29 op/s
Oct 10 10:17:26 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:26 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:26 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:26.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:27 compute-1 nova_compute[235132]: 2025-10-10 10:17:27.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:27 compute-1 nova_compute[235132]: 2025-10-10 10:17:27.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:27 compute-1 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 10 10:17:27 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/3051023765' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:17:27 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/3051023765' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:17:27 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:17:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:17:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:17:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:17:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:17:28 compute-1 nova_compute[235132]: 2025-10-10 10:17:28.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:28 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:28 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.002000054s ======
Oct 10 10:17:28 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:28.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Oct 10 10:17:28 compute-1 ceph-mon[79167]: pgmap v945: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.8 KiB/s wr, 29 op/s
Oct 10 10:17:28 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:28 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:28 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:28.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:29 compute-1 nova_compute[235132]: 2025-10-10 10:17:29.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:29 compute-1 podman[245722]: 2025-10-10 10:17:29.991972582 +0000 UTC m=+0.084992381 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct 10 10:17:30 compute-1 podman[245721]: 2025-10-10 10:17:30.017375378 +0000 UTC m=+0.111108977 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 10 10:17:30 compute-1 podman[245723]: 2025-10-10 10:17:30.035035782 +0000 UTC m=+0.113401549 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 10 10:17:30 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:30 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:30 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:30.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:30 compute-1 ceph-mon[79167]: pgmap v946: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.8 KiB/s wr, 29 op/s
Oct 10 10:17:30 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:30 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:30 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:30.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:31 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:17:32 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:32 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:17:32 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:32.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:17:32 compute-1 ceph-mon[79167]: pgmap v947: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.8 KiB/s wr, 29 op/s
Oct 10 10:17:32 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:17:32 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:32 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:32 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:32.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:17:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:17:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:17:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:17:33 compute-1 nova_compute[235132]: 2025-10-10 10:17:33.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:33 compute-1 ceph-mon[79167]: pgmap v948: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.8 KiB/s wr, 29 op/s
Oct 10 10:17:34 compute-1 nova_compute[235132]: 2025-10-10 10:17:34.072 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760091439.0712006, bd82d620-e0e5-4fb1-b8a5-973cefbcd107 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 10:17:34 compute-1 nova_compute[235132]: 2025-10-10 10:17:34.073 2 INFO nova.compute.manager [-] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] VM Stopped (Lifecycle Event)
Oct 10 10:17:34 compute-1 nova_compute[235132]: 2025-10-10 10:17:34.093 2 DEBUG nova.compute.manager [None req-26863534-1111-457a-810e-ee01ebc734ec - - - - - -] [instance: bd82d620-e0e5-4fb1-b8a5-973cefbcd107] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:17:34 compute-1 nova_compute[235132]: 2025-10-10 10:17:34.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:34 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:34 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:17:34 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:34.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:17:34 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:34 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:34 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:34.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:36 compute-1 ceph-mon[79167]: pgmap v949: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:17:36 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:36 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:36 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:36.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:36 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:36 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:17:36 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:36.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:17:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:17:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:17:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:17:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:17:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:17:38 compute-1 ceph-mon[79167]: pgmap v950: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:17:38 compute-1 nova_compute[235132]: 2025-10-10 10:17:38.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:38 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:38 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:17:38 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:38.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:17:38 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:38 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:38 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:38.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:39 compute-1 nova_compute[235132]: 2025-10-10 10:17:39.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:39 compute-1 sudo[245789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:17:39 compute-1 sudo[245789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:17:39 compute-1 sudo[245789]: pam_unix(sudo:session): session closed for user root
Oct 10 10:17:40 compute-1 ceph-mon[79167]: pgmap v951: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:17:40 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:40 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:40 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:40.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:40 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:40 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:40 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:40.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:42 compute-1 ceph-mon[79167]: pgmap v952: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:17:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:17:42.214 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:17:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:17:42.214 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:17:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:17:42.215 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:17:42 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:42 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:42 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:42.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:42 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:17:42 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:42 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:42 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:42.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:17:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:17:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:17:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:43 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:17:43 compute-1 nova_compute[235132]: 2025-10-10 10:17:43.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:44 compute-1 nova_compute[235132]: 2025-10-10 10:17:44.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:44 compute-1 ceph-mon[79167]: pgmap v953: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:17:44 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:44 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:44 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:44.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:44 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:44 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:17:44 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:44.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:17:46 compute-1 ceph-mon[79167]: pgmap v954: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:17:46 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/710863530' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:17:46 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:46 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:46 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:46.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:46 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:46 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:46 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:46.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:46 compute-1 podman[245817]: 2025-10-10 10:17:46.981178911 +0000 UTC m=+0.078577945 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 10 10:17:47 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:17:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:17:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:17:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:17:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:17:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:17:48 compute-1 nova_compute[235132]: 2025-10-10 10:17:48.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:48 compute-1 ceph-mon[79167]: pgmap v955: 353 pgs: 353 active+clean; 41 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:17:48 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:48 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:48 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:48.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:48 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:48 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:17:48 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:48.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:17:49 compute-1 nova_compute[235132]: 2025-10-10 10:17:49.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:49 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3477895027' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:17:49 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1219895572' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:17:50 compute-1 ceph-mon[79167]: pgmap v956: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:17:50 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:50 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:50 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:50.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:50 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:50 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:50 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:50.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:52 compute-1 ceph-mon[79167]: pgmap v957: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:17:52 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:52 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:17:52 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:52.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:17:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:17:52 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:52 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:52 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:52.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:17:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:17:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:17:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:17:53 compute-1 nova_compute[235132]: 2025-10-10 10:17:53.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:17:53 compute-1 nova_compute[235132]: 2025-10-10 10:17:53.044 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 10:17:53 compute-1 nova_compute[235132]: 2025-10-10 10:17:53.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:54 compute-1 nova_compute[235132]: 2025-10-10 10:17:54.039 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:17:54 compute-1 nova_compute[235132]: 2025-10-10 10:17:54.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:54 compute-1 ceph-mon[79167]: pgmap v958: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Oct 10 10:17:54 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:54 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:54 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:54.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:54 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:54 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:17:54 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:54.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:17:55 compute-1 nova_compute[235132]: 2025-10-10 10:17:55.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:17:56 compute-1 ceph-mon[79167]: pgmap v959: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Oct 10 10:17:56 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1465875043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:17:56 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/3077656643' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:17:56 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:56 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:56 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:56.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:56 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:56 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:56 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:56.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:57 compute-1 nova_compute[235132]: 2025-10-10 10:17:57.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:17:57 compute-1 nova_compute[235132]: 2025-10-10 10:17:57.044 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 10:17:57 compute-1 nova_compute[235132]: 2025-10-10 10:17:57.044 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 10:17:57 compute-1 nova_compute[235132]: 2025-10-10 10:17:57.064 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 10 10:17:57 compute-1 nova_compute[235132]: 2025-10-10 10:17:57.065 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:17:57 compute-1 nova_compute[235132]: 2025-10-10 10:17:57.065 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:17:57 compute-1 nova_compute[235132]: 2025-10-10 10:17:57.066 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:17:57 compute-1 sudo[245842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:17:57 compute-1 sudo[245842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:17:57 compute-1 sudo[245842]: pam_unix(sudo:session): session closed for user root
Oct 10 10:17:57 compute-1 sudo[245867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:17:57 compute-1 sudo[245867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:17:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:17:57 compute-1 sudo[245867]: pam_unix(sudo:session): session closed for user root
Oct 10 10:17:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:17:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:17:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:17:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:17:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:17:58 compute-1 nova_compute[235132]: 2025-10-10 10:17:58.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:58 compute-1 ceph-mon[79167]: pgmap v960: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Oct 10 10:17:58 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:17:58 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:17:58 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:17:58 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:17:58 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:17:58 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:17:58 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:17:58 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:58 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:17:58 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:17:58.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:17:58 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:17:58 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:17:58 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:17:58.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:17:59 compute-1 nova_compute[235132]: 2025-10-10 10:17:59.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:17:59 compute-1 sudo[245925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:17:59 compute-1 sudo[245925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:17:59 compute-1 sudo[245925]: pam_unix(sudo:session): session closed for user root
Oct 10 10:18:00 compute-1 nova_compute[235132]: 2025-10-10 10:18:00.061 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:18:00 compute-1 ceph-mon[79167]: pgmap v961: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct 10 10:18:00 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/872784269' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:18:00 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3277334980' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:18:00 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:00 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.003000082s ======
Oct 10 10:18:00 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:00.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000082s
Oct 10 10:18:00 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:00 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:18:00 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:00.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:18:00 compute-1 podman[245951]: 2025-10-10 10:18:00.989251023 +0000 UTC m=+0.082110462 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 10 10:18:00 compute-1 podman[245950]: 2025-10-10 10:18:00.993390757 +0000 UTC m=+0.084905399 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid)
Oct 10 10:18:01 compute-1 podman[245952]: 2025-10-10 10:18:01.035270924 +0000 UTC m=+0.122074177 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 10 10:18:01 compute-1 nova_compute[235132]: 2025-10-10 10:18:01.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:18:01 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3477850028' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:18:02 compute-1 nova_compute[235132]: 2025-10-10 10:18:02.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:18:02 compute-1 nova_compute[235132]: 2025-10-10 10:18:02.080 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:18:02 compute-1 nova_compute[235132]: 2025-10-10 10:18:02.080 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:18:02 compute-1 nova_compute[235132]: 2025-10-10 10:18:02.081 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:18:02 compute-1 nova_compute[235132]: 2025-10-10 10:18:02.081 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:18:02 compute-1 nova_compute[235132]: 2025-10-10 10:18:02.081 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:18:02 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:02 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:18:02 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:02.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:18:02 compute-1 ceph-mon[79167]: pgmap v962: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Oct 10 10:18:02 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:18:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:18:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:18:02 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3550810255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:18:02 compute-1 nova_compute[235132]: 2025-10-10 10:18:02.567 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:18:02 compute-1 nova_compute[235132]: 2025-10-10 10:18:02.771 2 WARNING nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:18:02 compute-1 nova_compute[235132]: 2025-10-10 10:18:02.773 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4914MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:18:02 compute-1 nova_compute[235132]: 2025-10-10 10:18:02.773 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:18:02 compute-1 nova_compute[235132]: 2025-10-10 10:18:02.774 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:18:02 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:02 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:18:02 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:02.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:18:02 compute-1 nova_compute[235132]: 2025-10-10 10:18:02.853 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:18:02 compute-1 nova_compute[235132]: 2025-10-10 10:18:02.854 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:18:02 compute-1 nova_compute[235132]: 2025-10-10 10:18:02.873 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:18:02 compute-1 sudo[246038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:18:02 compute-1 sudo[246038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:18:02 compute-1 sudo[246038]: pam_unix(sudo:session): session closed for user root
Oct 10 10:18:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:18:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:18:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:18:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:18:03 compute-1 nova_compute[235132]: 2025-10-10 10:18:03.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:03 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:18:03 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3370266523' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:18:03 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3550810255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:18:03 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:18:03 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:18:03 compute-1 nova_compute[235132]: 2025-10-10 10:18:03.381 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:18:03 compute-1 nova_compute[235132]: 2025-10-10 10:18:03.388 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:18:03 compute-1 nova_compute[235132]: 2025-10-10 10:18:03.410 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:18:03 compute-1 nova_compute[235132]: 2025-10-10 10:18:03.442 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:18:03 compute-1 nova_compute[235132]: 2025-10-10 10:18:03.442 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:18:04 compute-1 nova_compute[235132]: 2025-10-10 10:18:04.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:04 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:04 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:18:04 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:04.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:18:04 compute-1 ceph-mon[79167]: pgmap v963: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Oct 10 10:18:04 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3370266523' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:18:04 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:04 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:04 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:04.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:05 compute-1 ceph-mon[79167]: pgmap v964: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 156 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Oct 10 10:18:06 compute-1 ovn_controller[131749]: 2025-10-10T10:18:06Z|00087|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory
Oct 10 10:18:06 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:06 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:06 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:06.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:06 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:06 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:18:06 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:06.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:18:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:18:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:18:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:18:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:18:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:18:08 compute-1 ceph-mon[79167]: pgmap v965: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 156 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Oct 10 10:18:08 compute-1 nova_compute[235132]: 2025-10-10 10:18:08.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:08 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:08 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:08 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:08.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:08 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:08 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:08 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:08.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:09 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2863660861' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:18:09 compute-1 nova_compute[235132]: 2025-10-10 10:18:09.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:10 compute-1 ceph-mon[79167]: pgmap v966: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 156 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Oct 10 10:18:10 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:10 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:10 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:10.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:10 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:10 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:18:10 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:10.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:18:11 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #58. Immutable memtables: 0.
Oct 10 10:18:11 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:18:11.165030) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 10:18:11 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 58
Oct 10 10:18:11 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091491165066, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 1181, "num_deletes": 501, "total_data_size": 1951641, "memory_usage": 1978856, "flush_reason": "Manual Compaction"}
Oct 10 10:18:11 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #59: started
Oct 10 10:18:11 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091491175369, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 59, "file_size": 897365, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30018, "largest_seqno": 31194, "table_properties": {"data_size": 893126, "index_size": 1379, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13960, "raw_average_key_size": 19, "raw_value_size": 882191, "raw_average_value_size": 1232, "num_data_blocks": 61, "num_entries": 716, "num_filter_entries": 716, "num_deletions": 501, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760091425, "oldest_key_time": 1760091425, "file_creation_time": 1760091491, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:18:11 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 10406 microseconds, and 4129 cpu microseconds.
Oct 10 10:18:11 compute-1 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:18:11 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:18:11.175430) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #59: 897365 bytes OK
Oct 10 10:18:11 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:18:11.175459) [db/memtable_list.cc:519] [default] Level-0 commit table #59 started
Oct 10 10:18:11 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:18:11.177276) [db/memtable_list.cc:722] [default] Level-0 commit table #59: memtable #1 done
Oct 10 10:18:11 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:18:11.177297) EVENT_LOG_v1 {"time_micros": 1760091491177290, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 10:18:11 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:18:11.177350) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 10:18:11 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 1944925, prev total WAL file size 1944925, number of live WAL files 2.
Oct 10 10:18:11 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000055.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:18:11 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:18:11.179004) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Oct 10 10:18:11 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 10:18:11 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [59(876KB)], [57(16MB)]
Oct 10 10:18:11 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091491179064, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [59], "files_L6": [57], "score": -1, "input_data_size": 17822793, "oldest_snapshot_seqno": -1}
Oct 10 10:18:11 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #60: 5754 keys, 12048907 bytes, temperature: kUnknown
Oct 10 10:18:11 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091491252376, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 60, "file_size": 12048907, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12012884, "index_size": 20553, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14405, "raw_key_size": 148801, "raw_average_key_size": 25, "raw_value_size": 11911331, "raw_average_value_size": 2070, "num_data_blocks": 824, "num_entries": 5754, "num_filter_entries": 5754, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089522, "oldest_key_time": 0, "file_creation_time": 1760091491, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 60, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:18:11 compute-1 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:18:11 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:18:11.252757) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 12048907 bytes
Oct 10 10:18:11 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:18:11.256452) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 242.7 rd, 164.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 16.1 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(33.3) write-amplify(13.4) OK, records in: 6747, records dropped: 993 output_compression: NoCompression
Oct 10 10:18:11 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:18:11.256481) EVENT_LOG_v1 {"time_micros": 1760091491256467, "job": 34, "event": "compaction_finished", "compaction_time_micros": 73428, "compaction_time_cpu_micros": 50731, "output_level": 6, "num_output_files": 1, "total_output_size": 12048907, "num_input_records": 6747, "num_output_records": 5754, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 10:18:11 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:18:11 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091491256849, "job": 34, "event": "table_file_deletion", "file_number": 59}
Oct 10 10:18:11 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000057.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:18:11 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091491260610, "job": 34, "event": "table_file_deletion", "file_number": 57}
Oct 10 10:18:11 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:18:11.178890) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:18:11 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:18:11.260960) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:18:11 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:18:11.260994) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:18:11 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:18:11.260999) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:18:11 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:18:11.261003) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:18:11 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:18:11.261007) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:18:12 compute-1 ceph-mon[79167]: pgmap v967: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:18:12 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:12 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:12 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:12.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:18:12 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:12 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:18:12 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:12.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:18:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:18:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:13 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:18:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:13 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:18:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:13 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:18:13 compute-1 nova_compute[235132]: 2025-10-10 10:18:13.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:14 compute-1 ceph-mon[79167]: pgmap v968: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:18:14 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2249811561' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:18:14 compute-1 nova_compute[235132]: 2025-10-10 10:18:14.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:14 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:14 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:18:14 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:14.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:18:14 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:14 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:18:14 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:14.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:18:15 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1035205217' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:18:16 compute-1 ceph-mon[79167]: pgmap v969: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:18:16 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:16 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:18:16 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:16.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:18:16 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:16 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:16 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:16.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:17 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:18:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:18:17 compute-1 podman[246094]: 2025-10-10 10:18:17.967282915 +0000 UTC m=+0.071940914 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:18:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:18:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:18:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:18:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:18:18 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:18:18.024 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:dc:6a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:2f:dd:4e:d8:41'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:18:18 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:18:18.026 141156 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 10 10:18:18 compute-1 nova_compute[235132]: 2025-10-10 10:18:18.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:18 compute-1 nova_compute[235132]: 2025-10-10 10:18:18.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:18 compute-1 ceph-mon[79167]: pgmap v970: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:18:18 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:18 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:18:18 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:18.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:18:18 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:18 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:18:18 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:18.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:18:19 compute-1 nova_compute[235132]: 2025-10-10 10:18:19.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:19 compute-1 sudo[246114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:18:19 compute-1 sudo[246114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:18:19 compute-1 sudo[246114]: pam_unix(sudo:session): session closed for user root
Oct 10 10:18:20 compute-1 ceph-mon[79167]: pgmap v971: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Oct 10 10:18:20 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:20 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:20 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:20.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:20 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:20 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 10:18:20 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:20.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 10:18:22 compute-1 ceph-mon[79167]: pgmap v972: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct 10 10:18:22 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:22 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:22 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:22.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:22 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:18:22 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:22 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:22 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:22.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:18:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:18:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:18:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:18:23 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:18:23.028 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ee0899c1-415d-4aa8-abe8-1240b4e8bf2c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:18:23 compute-1 nova_compute[235132]: 2025-10-10 10:18:23.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:24 compute-1 nova_compute[235132]: 2025-10-10 10:18:24.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:24 compute-1 ceph-mon[79167]: pgmap v973: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 10 10:18:24 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:24 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 10:18:24 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:24.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 10:18:24 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:24 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:24 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:24.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:26 compute-1 ceph-mon[79167]: pgmap v974: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 10 10:18:26 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1600136722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:18:26 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:26 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:26 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:26.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:26 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:26 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:26 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:26.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:27 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/1965479803' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:18:27 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/1965479803' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:18:27 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:18:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:18:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:18:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:18:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:18:28 compute-1 nova_compute[235132]: 2025-10-10 10:18:28.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:28 compute-1 ceph-mon[79167]: pgmap v975: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 10 10:18:28 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:28 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:28 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:28.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:28 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:28 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:18:28 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:28.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:18:29 compute-1 nova_compute[235132]: 2025-10-10 10:18:29.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:30 compute-1 ceph-mon[79167]: pgmap v976: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Oct 10 10:18:30 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:30 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:30 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:30.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:30 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:30 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:30 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:30.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:31 compute-1 ceph-mon[79167]: pgmap v977: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 91 op/s
Oct 10 10:18:31 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:18:31 compute-1 podman[246146]: 2025-10-10 10:18:31.987304786 +0000 UTC m=+0.082534153 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 10 10:18:31 compute-1 podman[246145]: 2025-10-10 10:18:31.988959272 +0000 UTC m=+0.082771131 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 10 10:18:32 compute-1 podman[246147]: 2025-10-10 10:18:32.043014473 +0000 UTC m=+0.122535360 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct 10 10:18:32 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:32 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:32 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:32.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:32 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:18:32 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:32 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:32 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:32.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:18:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:18:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:18:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:18:33 compute-1 nova_compute[235132]: 2025-10-10 10:18:33.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:34 compute-1 ceph-mon[79167]: pgmap v978: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 91 op/s
Oct 10 10:18:34 compute-1 nova_compute[235132]: 2025-10-10 10:18:34.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:34 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:34 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.002000054s ======
Oct 10 10:18:34 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:34.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Oct 10 10:18:34 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:34 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:34 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:34.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:36 compute-1 ceph-mon[79167]: pgmap v979: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 10 10:18:36 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:36 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:18:36 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:36.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:18:36 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:36 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:18:36 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:36.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:18:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:18:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:18:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:18:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:18:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:18:38 compute-1 ceph-mon[79167]: pgmap v980: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 10 10:18:38 compute-1 nova_compute[235132]: 2025-10-10 10:18:38.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:38 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:38 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:38 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:38.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:38 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:38 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:38 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:38.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:39 compute-1 nova_compute[235132]: 2025-10-10 10:18:39.203 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:39 compute-1 sudo[246214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:18:39 compute-1 sudo[246214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:18:39 compute-1 sudo[246214]: pam_unix(sudo:session): session closed for user root
Oct 10 10:18:40 compute-1 ceph-mon[79167]: pgmap v981: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct 10 10:18:40 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:40 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:18:40 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:40.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:18:40 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:40 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:40 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:40.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:42 compute-1 ceph-mon[79167]: pgmap v982: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:18:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:18:42.216 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:18:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:18:42.216 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:18:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:18:42.217 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:18:42 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:42 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:42 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:42.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:42 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:18:42 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:42 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:18:42 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:42.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:18:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:18:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:18:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:18:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:43 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:18:43 compute-1 nova_compute[235132]: 2025-10-10 10:18:43.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:44 compute-1 nova_compute[235132]: 2025-10-10 10:18:44.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:44 compute-1 ceph-mon[79167]: pgmap v983: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:18:44 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:44 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:44 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:44.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:44 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:44 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:44 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:44.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:46 compute-1 ceph-mon[79167]: pgmap v984: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:18:46 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3655689645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:18:46 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:46 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:46 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:46.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:46 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:46 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:46 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:46.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:47 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:18:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:18:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:18:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:18:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:18:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:18:48 compute-1 nova_compute[235132]: 2025-10-10 10:18:48.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:48 compute-1 ceph-mon[79167]: pgmap v985: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:18:48 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:48 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:48 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:48.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:48 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:48 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:18:48 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:48.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:18:48 compute-1 podman[246243]: 2025-10-10 10:18:48.978198447 +0000 UTC m=+0.072999972 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Oct 10 10:18:49 compute-1 nova_compute[235132]: 2025-10-10 10:18:49.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:50 compute-1 ceph-mon[79167]: pgmap v986: 353 pgs: 353 active+clean; 88 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:18:50 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:50 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:50 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:50.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:50 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:50 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:18:50 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:50.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:18:52 compute-1 ceph-mon[79167]: pgmap v987: 353 pgs: 353 active+clean; 88 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:18:52 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:52 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:18:52 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:52.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:18:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:18:52 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:52 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:52 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:52.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:18:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:18:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:18:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:18:53 compute-1 nova_compute[235132]: 2025-10-10 10:18:53.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:54 compute-1 nova_compute[235132]: 2025-10-10 10:18:54.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:54 compute-1 ceph-mon[79167]: pgmap v988: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:18:54 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/434918906' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:18:54 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:54 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:18:54 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:54.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:18:54 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:54 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:54 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:54.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:55 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3376492106' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:18:55 compute-1 nova_compute[235132]: 2025-10-10 10:18:55.444 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:18:55 compute-1 nova_compute[235132]: 2025-10-10 10:18:55.445 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:18:55 compute-1 nova_compute[235132]: 2025-10-10 10:18:55.445 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 10:18:56 compute-1 nova_compute[235132]: 2025-10-10 10:18:56.040 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:18:56 compute-1 ceph-mon[79167]: pgmap v989: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:18:56 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/475021902' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:18:56 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:56 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:18:56 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:56.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:18:56 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:56 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:18:56 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:56.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:18:57 compute-1 nova_compute[235132]: 2025-10-10 10:18:57.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:18:57 compute-1 nova_compute[235132]: 2025-10-10 10:18:57.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:18:57 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/3962479693' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:18:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:18:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:18:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:18:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:18:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:18:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:18:58 compute-1 nova_compute[235132]: 2025-10-10 10:18:58.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:18:58 compute-1 nova_compute[235132]: 2025-10-10 10:18:58.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 10:18:58 compute-1 nova_compute[235132]: 2025-10-10 10:18:58.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 10:18:58 compute-1 nova_compute[235132]: 2025-10-10 10:18:58.067 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 10 10:18:58 compute-1 nova_compute[235132]: 2025-10-10 10:18:58.068 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:18:58 compute-1 nova_compute[235132]: 2025-10-10 10:18:58.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:58 compute-1 ceph-mon[79167]: pgmap v990: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:18:58 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:58 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:18:58 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:18:58.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:18:58 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:18:58 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:18:58 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:18:58.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:18:59 compute-1 nova_compute[235132]: 2025-10-10 10:18:59.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:18:59 compute-1 sudo[246269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:18:59 compute-1 sudo[246269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:18:59 compute-1 sudo[246269]: pam_unix(sudo:session): session closed for user root
Oct 10 10:19:00 compute-1 ceph-mon[79167]: pgmap v991: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Oct 10 10:19:00 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/330076369' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:19:00 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:00 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:19:00 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:00.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:19:00 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:00 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:19:00 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:00.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:19:01 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1712929211' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:19:01 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:19:02 compute-1 nova_compute[235132]: 2025-10-10 10:19:02.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:19:02 compute-1 nova_compute[235132]: 2025-10-10 10:19:02.045 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:19:02 compute-1 nova_compute[235132]: 2025-10-10 10:19:02.075 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:19:02 compute-1 nova_compute[235132]: 2025-10-10 10:19:02.076 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:19:02 compute-1 nova_compute[235132]: 2025-10-10 10:19:02.076 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:19:02 compute-1 nova_compute[235132]: 2025-10-10 10:19:02.076 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:19:02 compute-1 nova_compute[235132]: 2025-10-10 10:19:02.077 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:19:02 compute-1 ceph-mon[79167]: pgmap v992: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 8.4 KiB/s rd, 12 KiB/s wr, 10 op/s
Oct 10 10:19:02 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:02 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:19:02 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:02.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:19:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:19:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:19:02 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3651707172' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:19:02 compute-1 nova_compute[235132]: 2025-10-10 10:19:02.552 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:19:02 compute-1 nova_compute[235132]: 2025-10-10 10:19:02.704 2 WARNING nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:19:02 compute-1 nova_compute[235132]: 2025-10-10 10:19:02.706 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4910MB free_disk=59.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:19:02 compute-1 nova_compute[235132]: 2025-10-10 10:19:02.706 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:19:02 compute-1 nova_compute[235132]: 2025-10-10 10:19:02.707 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:19:02 compute-1 nova_compute[235132]: 2025-10-10 10:19:02.776 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:19:02 compute-1 nova_compute[235132]: 2025-10-10 10:19:02.777 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:19:02 compute-1 nova_compute[235132]: 2025-10-10 10:19:02.818 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:19:02 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:02 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:02 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:02.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:02 compute-1 podman[246318]: 2025-10-10 10:19:02.970244093 +0000 UTC m=+0.066679049 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid)
Oct 10 10:19:02 compute-1 podman[246319]: 2025-10-10 10:19:02.981612584 +0000 UTC m=+0.068986031 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct 10 10:19:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:19:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:19:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:19:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:19:03 compute-1 podman[246325]: 2025-10-10 10:19:03.011835783 +0000 UTC m=+0.092133307 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible)
Oct 10 10:19:03 compute-1 sudo[246399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:19:03 compute-1 sudo[246399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:19:03 compute-1 sudo[246399]: pam_unix(sudo:session): session closed for user root
Oct 10 10:19:03 compute-1 sudo[246424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 10 10:19:03 compute-1 sudo[246424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:19:03 compute-1 nova_compute[235132]: 2025-10-10 10:19:03.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:03 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:19:03 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4015442307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:19:03 compute-1 nova_compute[235132]: 2025-10-10 10:19:03.294 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:19:03 compute-1 nova_compute[235132]: 2025-10-10 10:19:03.302 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:19:03 compute-1 nova_compute[235132]: 2025-10-10 10:19:03.317 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:19:03 compute-1 nova_compute[235132]: 2025-10-10 10:19:03.319 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:19:03 compute-1 nova_compute[235132]: 2025-10-10 10:19:03.320 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:19:03 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3651707172' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:19:03 compute-1 ceph-mon[79167]: pgmap v993: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Oct 10 10:19:03 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/4015442307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:19:03 compute-1 podman[246523]: 2025-10-10 10:19:03.784866005 +0000 UTC m=+0.060847300 container exec 8a2c16c69263d410aefee28722d7403147210c67386f896d09115878d61e6ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Oct 10 10:19:03 compute-1 podman[246523]: 2025-10-10 10:19:03.911943008 +0000 UTC m=+0.187924283 container exec_died 8a2c16c69263d410aefee28722d7403147210c67386f896d09115878d61e6ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:19:04 compute-1 nova_compute[235132]: 2025-10-10 10:19:04.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:04 compute-1 podman[246641]: 2025-10-10 10:19:04.409584621 +0000 UTC m=+0.063033090 container exec db44b00524aaf85460d5de4878ad14e99cea939212a287f5217a7de1b31f7321 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 10:19:04 compute-1 podman[246641]: 2025-10-10 10:19:04.421870257 +0000 UTC m=+0.075318726 container exec_died db44b00524aaf85460d5de4878ad14e99cea939212a287f5217a7de1b31f7321 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 10:19:04 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:04 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.003000082s ======
Oct 10 10:19:04 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:04.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000082s
Oct 10 10:19:04 compute-1 podman[246733]: 2025-10-10 10:19:04.815000645 +0000 UTC m=+0.063567804 container exec d3cf84749d9f2f04e4804a4d648101430763ca38f98380504c4e60979dc43596 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:19:04 compute-1 podman[246733]: 2025-10-10 10:19:04.836944996 +0000 UTC m=+0.085512135 container exec_died d3cf84749d9f2f04e4804a4d648101430763ca38f98380504c4e60979dc43596 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct 10 10:19:04 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:04 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:04 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:04.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:05 compute-1 podman[246800]: 2025-10-10 10:19:05.082806526 +0000 UTC m=+0.062366421 container exec 7ce1be4884f313700b9f3fe6c66e14b163c4d6bf31089a45b751a6389f8be562 (image=quay.io/ceph/haproxy:2.3, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw)
Oct 10 10:19:05 compute-1 podman[246800]: 2025-10-10 10:19:05.092377299 +0000 UTC m=+0.071937154 container exec_died 7ce1be4884f313700b9f3fe6c66e14b163c4d6bf31089a45b751a6389f8be562 (image=quay.io/ceph/haproxy:2.3, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw)
Oct 10 10:19:05 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:19:05 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:19:05 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 10 10:19:05 compute-1 podman[246866]: 2025-10-10 10:19:05.314391155 +0000 UTC m=+0.054756943 container exec 8efe4c511a9ca69c4fbe77af128c823fd37bedd1e530d3bfb1fc1044e25a5138 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, vcs-type=git, version=2.2.4, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 10 10:19:05 compute-1 podman[246866]: 2025-10-10 10:19:05.326590309 +0000 UTC m=+0.066956077 container exec_died 8efe4c511a9ca69c4fbe77af128c823fd37bedd1e530d3bfb1fc1044e25a5138 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, description=keepalived for Ceph, version=2.2.4, release=1793, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20)
Oct 10 10:19:05 compute-1 sudo[246424]: pam_unix(sudo:session): session closed for user root
Oct 10 10:19:05 compute-1 sudo[246900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:19:05 compute-1 sudo[246900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:19:05 compute-1 sudo[246900]: pam_unix(sudo:session): session closed for user root
Oct 10 10:19:05 compute-1 sudo[246925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:19:05 compute-1 sudo[246925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:19:06 compute-1 sudo[246925]: pam_unix(sudo:session): session closed for user root
Oct 10 10:19:06 compute-1 sudo[246981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:19:06 compute-1 sudo[246981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:19:06 compute-1 sudo[246981]: pam_unix(sudo:session): session closed for user root
Oct 10 10:19:06 compute-1 ceph-mon[79167]: pgmap v994: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 10 10:19:06 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:19:06 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:19:06 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:19:06 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:19:06 compute-1 sudo[247006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Oct 10 10:19:06 compute-1 sudo[247006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:19:06 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:06 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:06 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:06.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:06 compute-1 sudo[247006]: pam_unix(sudo:session): session closed for user root
Oct 10 10:19:06 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:06 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:06 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:06.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:07 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:19:07 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:19:07 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 10 10:19:07 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:19:07 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:19:07 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 10 10:19:07 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:19:07 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:19:07 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:19:07 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:19:07 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:19:07 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:19:07 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:19:07 compute-1 ceph-mon[79167]: pgmap v995: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 10 10:19:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:19:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:19:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:19:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:19:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:19:08 compute-1 nova_compute[235132]: 2025-10-10 10:19:08.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:08 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:08 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:08 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:08.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:08 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:08 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:19:08 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:08.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:19:09 compute-1 nova_compute[235132]: 2025-10-10 10:19:09.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:10 compute-1 ceph-mon[79167]: pgmap v996: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Oct 10 10:19:10 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:10 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:10 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:10.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:10 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:10 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:19:10 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:10.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:19:11 compute-1 sudo[247053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:19:11 compute-1 sudo[247053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:19:11 compute-1 sudo[247053]: pam_unix(sudo:session): session closed for user root
Oct 10 10:19:12 compute-1 ceph-mon[79167]: pgmap v997: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 65 op/s
Oct 10 10:19:12 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:19:12 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:19:12 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:12 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:19:12 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:12.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:19:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:19:12 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:12 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:19:12 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:12.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:19:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:19:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:19:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:19:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:13 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:19:13 compute-1 nova_compute[235132]: 2025-10-10 10:19:13.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:14 compute-1 ceph-mon[79167]: pgmap v998: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 126 op/s
Oct 10 10:19:14 compute-1 nova_compute[235132]: 2025-10-10 10:19:14.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:14 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:14 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:14 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:14.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:14 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:14 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:14 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:14.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:16 compute-1 ceph-mon[79167]: pgmap v999: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 10 10:19:16 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:16 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:19:16 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:16.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:19:16 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:16 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:16 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:16.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:17 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:19:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:19:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:19:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:19:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:19:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:19:18 compute-1 ceph-mon[79167]: pgmap v1000: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 10 10:19:18 compute-1 nova_compute[235132]: 2025-10-10 10:19:18.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:18 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:18 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:18 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:18.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:18 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:18 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:18 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:18.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:19 compute-1 nova_compute[235132]: 2025-10-10 10:19:19.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:20 compute-1 podman[247082]: 2025-10-10 10:19:20.002785421 +0000 UTC m=+0.102492593 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 10 10:19:20 compute-1 sudo[247101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:19:20 compute-1 sudo[247101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:19:20 compute-1 sudo[247101]: pam_unix(sudo:session): session closed for user root
Oct 10 10:19:20 compute-1 ceph-mon[79167]: pgmap v1001: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 307 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 10 10:19:20 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:20 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:20 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:20.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:20 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:20 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:19:20 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:20.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:19:22 compute-1 ceph-mon[79167]: pgmap v1002: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 303 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 10 10:19:22 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:22 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:19:22 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:22.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:19:22 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:19:22 compute-1 nova_compute[235132]: 2025-10-10 10:19:22.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:22 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:19:22.532 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:dc:6a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:2f:dd:4e:d8:41'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:19:22 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:19:22.534 141156 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 10 10:19:22 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:22 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:22 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:22.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:19:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:19:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:19:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:19:23 compute-1 nova_compute[235132]: 2025-10-10 10:19:23.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:24 compute-1 nova_compute[235132]: 2025-10-10 10:19:24.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:24 compute-1 ceph-mon[79167]: pgmap v1003: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 304 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 10 10:19:24 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:24 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:24 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:24.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:24 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:24 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:24 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:24.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:25 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2257682561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:19:26 compute-1 ceph-mon[79167]: pgmap v1004: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 16 KiB/s wr, 1 op/s
Oct 10 10:19:26 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:26 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:26 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:26.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:26 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:26 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:26 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:26.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:27 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/1758263211' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:19:27 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/1758263211' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:19:27 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:19:27 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:19:27.536 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ee0899c1-415d-4aa8-abe8-1240b4e8bf2c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:19:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:19:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:19:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:19:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:19:28 compute-1 nova_compute[235132]: 2025-10-10 10:19:28.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:28 compute-1 ceph-mon[79167]: pgmap v1005: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 16 KiB/s wr, 1 op/s
Oct 10 10:19:28 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:28 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:28 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:28.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:28 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:28 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:19:28 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:28.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:19:29 compute-1 nova_compute[235132]: 2025-10-10 10:19:29.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:30 compute-1 ceph-mon[79167]: pgmap v1006: 353 pgs: 353 active+clean; 41 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 17 KiB/s wr, 30 op/s
Oct 10 10:19:30 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:30 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:19:30 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:30.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:19:30 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:30 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:19:30 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:30.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:19:31 compute-1 ceph-mon[79167]: pgmap v1007: 353 pgs: 353 active+clean; 41 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 6.5 KiB/s wr, 29 op/s
Oct 10 10:19:31 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:19:32 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:32 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:19:32 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:32.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:19:32 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:19:32 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:32 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:32 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:32.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:19:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:19:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:19:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:19:33 compute-1 nova_compute[235132]: 2025-10-10 10:19:33.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:33 compute-1 podman[247134]: 2025-10-10 10:19:33.980691589 +0000 UTC m=+0.077464239 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Oct 10 10:19:33 compute-1 podman[247133]: 2025-10-10 10:19:33.991650389 +0000 UTC m=+0.084762529 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 10 10:19:34 compute-1 podman[247135]: 2025-10-10 10:19:34.113898801 +0000 UTC m=+0.203619318 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 10 10:19:34 compute-1 ceph-mon[79167]: pgmap v1008: 353 pgs: 353 active+clean; 41 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 6.5 KiB/s wr, 29 op/s
Oct 10 10:19:34 compute-1 nova_compute[235132]: 2025-10-10 10:19:34.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:34 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:34 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:34 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:34.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:34 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:34 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:34 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:34.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:36 compute-1 ceph-mon[79167]: pgmap v1009: 353 pgs: 353 active+clean; 41 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 10 10:19:36 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:36 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:19:36 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:36.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:19:36 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:36 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:19:36 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:36.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:19:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:19:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:19:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:19:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:19:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:19:38 compute-1 ceph-mon[79167]: pgmap v1010: 353 pgs: 353 active+clean; 41 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 10 10:19:38 compute-1 nova_compute[235132]: 2025-10-10 10:19:38.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:38 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:38 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:38 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:38.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:39 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:39 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:39 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:39.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:39 compute-1 nova_compute[235132]: 2025-10-10 10:19:39.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:40 compute-1 sudo[247200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:19:40 compute-1 sudo[247200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:19:40 compute-1 sudo[247200]: pam_unix(sudo:session): session closed for user root
Oct 10 10:19:40 compute-1 ceph-mon[79167]: pgmap v1011: 353 pgs: 353 active+clean; 41 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Oct 10 10:19:40 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:40 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:40 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:40.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:41 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:41 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:19:41 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:41.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:19:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:19:42.217 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:19:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:19:42.218 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:19:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:19:42.218 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:19:42 compute-1 ceph-mon[79167]: pgmap v1012: 353 pgs: 353 active+clean; 41 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:19:42 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:19:42 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:42 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:19:42 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:42.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:19:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:19:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:19:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:19:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:43 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:19:43 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:43 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:19:43 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:43.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:19:43 compute-1 nova_compute[235132]: 2025-10-10 10:19:43.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:44 compute-1 ceph-mon[79167]: pgmap v1013: 353 pgs: 353 active+clean; 41 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:19:44 compute-1 nova_compute[235132]: 2025-10-10 10:19:44.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:44 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:44 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:19:44 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:44.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:19:45 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:45 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:45 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:45.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:46 compute-1 ceph-mon[79167]: pgmap v1014: 353 pgs: 353 active+clean; 41 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:19:46 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:46 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:19:46 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:46.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:19:47 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:47 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:47 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:47.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:47 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:19:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:19:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:19:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:19:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:19:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:19:48 compute-1 nova_compute[235132]: 2025-10-10 10:19:48.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:48 compute-1 ceph-mon[79167]: pgmap v1015: 353 pgs: 353 active+clean; 41 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:19:48 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2667140307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:19:48 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:48 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:19:48 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:48.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:19:49 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:49 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:49 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:49.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:49 compute-1 nova_compute[235132]: 2025-10-10 10:19:49.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:50 compute-1 ceph-mon[79167]: pgmap v1016: 353 pgs: 353 active+clean; 41 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:19:50 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:50 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:50 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:50.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:50 compute-1 podman[247230]: 2025-10-10 10:19:50.965428359 +0000 UTC m=+0.066752437 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:19:51 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:51 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:19:51 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:51.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:19:51 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/4063882823' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:19:52 compute-1 ceph-mon[79167]: pgmap v1017: 353 pgs: 353 active+clean; 41 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:19:52 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1076408673' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:19:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:19:52 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:52 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:19:52 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:52.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:19:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:19:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:19:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:19:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:19:53 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:53 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:53 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:53.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:53 compute-1 nova_compute[235132]: 2025-10-10 10:19:53.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:54 compute-1 nova_compute[235132]: 2025-10-10 10:19:54.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:54 compute-1 ceph-mon[79167]: pgmap v1018: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:19:54 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:54 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:54 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:54.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:55 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:55 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:55 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:55.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:55 compute-1 nova_compute[235132]: 2025-10-10 10:19:55.320 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:19:55 compute-1 nova_compute[235132]: 2025-10-10 10:19:55.321 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 10:19:56 compute-1 nova_compute[235132]: 2025-10-10 10:19:56.040 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:19:56 compute-1 nova_compute[235132]: 2025-10-10 10:19:56.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:19:56 compute-1 ceph-mon[79167]: pgmap v1019: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:19:56 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:56 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:19:56 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:56.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:19:57 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:57 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:19:57 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:57.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:19:57 compute-1 nova_compute[235132]: 2025-10-10 10:19:57.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:19:57 compute-1 nova_compute[235132]: 2025-10-10 10:19:57.045 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:19:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:19:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:19:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:19:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:19:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:19:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:19:58 compute-1 nova_compute[235132]: 2025-10-10 10:19:58.045 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:19:58 compute-1 nova_compute[235132]: 2025-10-10 10:19:58.046 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 10:19:58 compute-1 nova_compute[235132]: 2025-10-10 10:19:58.046 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 10:19:58 compute-1 nova_compute[235132]: 2025-10-10 10:19:58.063 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 10 10:19:58 compute-1 nova_compute[235132]: 2025-10-10 10:19:58.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:19:58 compute-1 ceph-mon[79167]: pgmap v1020: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:19:58 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:58 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:19:58 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:19:58.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:19:59 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:19:59 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:19:59 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:19:59.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:19:59 compute-1 nova_compute[235132]: 2025-10-10 10:19:59.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:00 compute-1 nova_compute[235132]: 2025-10-10 10:20:00.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:20:00 compute-1 nova_compute[235132]: 2025-10-10 10:20:00.062 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:20:00 compute-1 sudo[247254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:20:00 compute-1 sudo[247254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:20:00 compute-1 sudo[247254]: pam_unix(sudo:session): session closed for user root
Oct 10 10:20:00 compute-1 ceph-mon[79167]: pgmap v1021: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 10 10:20:00 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3969596316' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:20:00 compute-1 ceph-mon[79167]: overall HEALTH_OK
Oct 10 10:20:00 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:00 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:00 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:00.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:01 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:01 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:20:01 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:01.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:20:01 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3574646968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:20:01 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:20:02 compute-1 nova_compute[235132]: 2025-10-10 10:20:02.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:20:02 compute-1 ceph-mon[79167]: pgmap v1022: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 10 10:20:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:20:02 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:02 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:02 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:02.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:20:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:20:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:20:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:20:03 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:03 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:20:03 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:03.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:20:03 compute-1 nova_compute[235132]: 2025-10-10 10:20:03.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:20:03 compute-1 nova_compute[235132]: 2025-10-10 10:20:03.073 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:20:03 compute-1 nova_compute[235132]: 2025-10-10 10:20:03.074 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:20:03 compute-1 nova_compute[235132]: 2025-10-10 10:20:03.074 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:20:03 compute-1 nova_compute[235132]: 2025-10-10 10:20:03.075 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:20:03 compute-1 nova_compute[235132]: 2025-10-10 10:20:03.075 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:20:03 compute-1 nova_compute[235132]: 2025-10-10 10:20:03.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:03 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2835618541' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:20:03 compute-1 ceph-mon[79167]: pgmap v1023: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 10 10:20:03 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:20:03 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1396745541' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:20:03 compute-1 nova_compute[235132]: 2025-10-10 10:20:03.619 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:20:03 compute-1 nova_compute[235132]: 2025-10-10 10:20:03.876 2 WARNING nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:20:03 compute-1 nova_compute[235132]: 2025-10-10 10:20:03.878 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4924MB free_disk=59.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:20:03 compute-1 nova_compute[235132]: 2025-10-10 10:20:03.878 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:20:03 compute-1 nova_compute[235132]: 2025-10-10 10:20:03.878 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:20:03 compute-1 nova_compute[235132]: 2025-10-10 10:20:03.950 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:20:03 compute-1 nova_compute[235132]: 2025-10-10 10:20:03.950 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:20:03 compute-1 nova_compute[235132]: 2025-10-10 10:20:03.969 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:20:04 compute-1 nova_compute[235132]: 2025-10-10 10:20:04.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:04 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:20:04 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/190815280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:20:04 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/1396745541' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:20:04 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1513687594' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:20:04 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/190815280' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:20:04 compute-1 nova_compute[235132]: 2025-10-10 10:20:04.436 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:20:04 compute-1 nova_compute[235132]: 2025-10-10 10:20:04.443 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:20:04 compute-1 nova_compute[235132]: 2025-10-10 10:20:04.468 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:20:04 compute-1 nova_compute[235132]: 2025-10-10 10:20:04.470 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:20:04 compute-1 nova_compute[235132]: 2025-10-10 10:20:04.470 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:20:04 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:04 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:20:04 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:04.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:20:04 compute-1 nova_compute[235132]: 2025-10-10 10:20:04.832 2 DEBUG oslo_concurrency.lockutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:20:04 compute-1 nova_compute[235132]: 2025-10-10 10:20:04.833 2 DEBUG oslo_concurrency.lockutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:20:04 compute-1 nova_compute[235132]: 2025-10-10 10:20:04.878 2 DEBUG nova.compute.manager [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 10 10:20:04 compute-1 nova_compute[235132]: 2025-10-10 10:20:04.956 2 DEBUG oslo_concurrency.lockutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:20:04 compute-1 nova_compute[235132]: 2025-10-10 10:20:04.957 2 DEBUG oslo_concurrency.lockutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:20:04 compute-1 nova_compute[235132]: 2025-10-10 10:20:04.966 2 DEBUG nova.virt.hardware [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 10 10:20:04 compute-1 nova_compute[235132]: 2025-10-10 10:20:04.966 2 INFO nova.compute.claims [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Claim successful on node compute-1.ctlplane.example.com
Oct 10 10:20:04 compute-1 podman[247325]: 2025-10-10 10:20:04.996776328 +0000 UTC m=+0.080625815 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3)
Oct 10 10:20:05 compute-1 podman[247326]: 2025-10-10 10:20:05.012364664 +0000 UTC m=+0.087671408 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 10:20:05 compute-1 podman[247327]: 2025-10-10 10:20:05.0312278 +0000 UTC m=+0.104779556 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 10 10:20:05 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:05 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:05 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:05.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:05 compute-1 nova_compute[235132]: 2025-10-10 10:20:05.068 2 DEBUG oslo_concurrency.processutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:20:05 compute-1 ceph-mon[79167]: pgmap v1024: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 10 10:20:05 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:20:05 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1617462293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:20:05 compute-1 nova_compute[235132]: 2025-10-10 10:20:05.525 2 DEBUG oslo_concurrency.processutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:20:05 compute-1 nova_compute[235132]: 2025-10-10 10:20:05.533 2 DEBUG nova.compute.provider_tree [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:20:05 compute-1 nova_compute[235132]: 2025-10-10 10:20:05.563 2 DEBUG nova.scheduler.client.report [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:20:05 compute-1 nova_compute[235132]: 2025-10-10 10:20:05.603 2 DEBUG oslo_concurrency.lockutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:20:05 compute-1 nova_compute[235132]: 2025-10-10 10:20:05.604 2 DEBUG nova.compute.manager [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 10 10:20:05 compute-1 nova_compute[235132]: 2025-10-10 10:20:05.664 2 DEBUG nova.compute.manager [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 10 10:20:05 compute-1 nova_compute[235132]: 2025-10-10 10:20:05.665 2 DEBUG nova.network.neutron [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 10 10:20:05 compute-1 nova_compute[235132]: 2025-10-10 10:20:05.691 2 INFO nova.virt.libvirt.driver [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 10 10:20:05 compute-1 nova_compute[235132]: 2025-10-10 10:20:05.716 2 DEBUG nova.compute.manager [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 10 10:20:05 compute-1 nova_compute[235132]: 2025-10-10 10:20:05.838 2 DEBUG nova.compute.manager [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 10 10:20:05 compute-1 nova_compute[235132]: 2025-10-10 10:20:05.840 2 DEBUG nova.virt.libvirt.driver [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 10 10:20:05 compute-1 nova_compute[235132]: 2025-10-10 10:20:05.841 2 INFO nova.virt.libvirt.driver [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Creating image(s)
Oct 10 10:20:05 compute-1 nova_compute[235132]: 2025-10-10 10:20:05.876 2 DEBUG nova.storage.rbd_utils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:20:05 compute-1 nova_compute[235132]: 2025-10-10 10:20:05.922 2 DEBUG nova.storage.rbd_utils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:20:05 compute-1 nova_compute[235132]: 2025-10-10 10:20:05.960 2 DEBUG nova.storage.rbd_utils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:20:05 compute-1 nova_compute[235132]: 2025-10-10 10:20:05.966 2 DEBUG oslo_concurrency.processutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:20:05 compute-1 nova_compute[235132]: 2025-10-10 10:20:05.994 2 DEBUG nova.policy [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7956778c03764aaf8906c9b435337976', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 10 10:20:06 compute-1 nova_compute[235132]: 2025-10-10 10:20:06.054 2 DEBUG oslo_concurrency.processutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:20:06 compute-1 nova_compute[235132]: 2025-10-10 10:20:06.055 2 DEBUG oslo_concurrency.lockutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "eec5fe2328f977d3b1a385313e521aef425c0ac1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:20:06 compute-1 nova_compute[235132]: 2025-10-10 10:20:06.056 2 DEBUG oslo_concurrency.lockutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "eec5fe2328f977d3b1a385313e521aef425c0ac1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:20:06 compute-1 nova_compute[235132]: 2025-10-10 10:20:06.056 2 DEBUG oslo_concurrency.lockutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "eec5fe2328f977d3b1a385313e521aef425c0ac1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:20:06 compute-1 nova_compute[235132]: 2025-10-10 10:20:06.083 2 DEBUG nova.storage.rbd_utils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:20:06 compute-1 nova_compute[235132]: 2025-10-10 10:20:06.087 2 DEBUG oslo_concurrency.processutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:20:06 compute-1 nova_compute[235132]: 2025-10-10 10:20:06.380 2 DEBUG oslo_concurrency.processutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/eec5fe2328f977d3b1a385313e521aef425c0ac1 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.294s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:20:06 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/1617462293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:20:06 compute-1 nova_compute[235132]: 2025-10-10 10:20:06.485 2 DEBUG nova.storage.rbd_utils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] resizing rbd image 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 10 10:20:06 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:06 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.002000054s ======
Oct 10 10:20:06 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:06.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Oct 10 10:20:06 compute-1 nova_compute[235132]: 2025-10-10 10:20:06.623 2 DEBUG nova.objects.instance [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'migration_context' on Instance uuid 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 10:20:06 compute-1 nova_compute[235132]: 2025-10-10 10:20:06.642 2 DEBUG nova.virt.libvirt.driver [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 10 10:20:06 compute-1 nova_compute[235132]: 2025-10-10 10:20:06.642 2 DEBUG nova.virt.libvirt.driver [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Ensure instance console log exists: /var/lib/nova/instances/03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 10 10:20:06 compute-1 nova_compute[235132]: 2025-10-10 10:20:06.643 2 DEBUG oslo_concurrency.lockutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:20:06 compute-1 nova_compute[235132]: 2025-10-10 10:20:06.644 2 DEBUG oslo_concurrency.lockutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:20:06 compute-1 nova_compute[235132]: 2025-10-10 10:20:06.644 2 DEBUG oslo_concurrency.lockutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:20:06 compute-1 nova_compute[235132]: 2025-10-10 10:20:06.694 2 DEBUG nova.network.neutron [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Successfully created port: d7538303-305d-4e01-9d26-cff58aec5656 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 10 10:20:07 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:07 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:20:07 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:07.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:20:07 compute-1 ceph-mon[79167]: pgmap v1025: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 10 10:20:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:20:07 compute-1 nova_compute[235132]: 2025-10-10 10:20:07.691 2 DEBUG nova.network.neutron [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Successfully updated port: d7538303-305d-4e01-9d26-cff58aec5656 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 10 10:20:07 compute-1 nova_compute[235132]: 2025-10-10 10:20:07.715 2 DEBUG oslo_concurrency.lockutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "refresh_cache-03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:20:07 compute-1 nova_compute[235132]: 2025-10-10 10:20:07.716 2 DEBUG oslo_concurrency.lockutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquired lock "refresh_cache-03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:20:07 compute-1 nova_compute[235132]: 2025-10-10 10:20:07.717 2 DEBUG nova.network.neutron [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 10 10:20:07 compute-1 nova_compute[235132]: 2025-10-10 10:20:07.831 2 DEBUG nova.compute.manager [req-46be1474-509d-45e8-8e45-d4cae6c9b1a9 req-686846bd-a091-43d2-8bd0-72298cbd2318 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Received event network-changed-d7538303-305d-4e01-9d26-cff58aec5656 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:20:07 compute-1 nova_compute[235132]: 2025-10-10 10:20:07.832 2 DEBUG nova.compute.manager [req-46be1474-509d-45e8-8e45-d4cae6c9b1a9 req-686846bd-a091-43d2-8bd0-72298cbd2318 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Refreshing instance network info cache due to event network-changed-d7538303-305d-4e01-9d26-cff58aec5656. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 10 10:20:07 compute-1 nova_compute[235132]: 2025-10-10 10:20:07.832 2 DEBUG oslo_concurrency.lockutils [req-46be1474-509d-45e8-8e45-d4cae6c9b1a9 req-686846bd-a091-43d2-8bd0-72298cbd2318 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "refresh_cache-03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:20:07 compute-1 nova_compute[235132]: 2025-10-10 10:20:07.964 2 DEBUG nova.network.neutron [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 10 10:20:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:20:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:20:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:20:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:20:08 compute-1 nova_compute[235132]: 2025-10-10 10:20:08.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:08 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:08 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:20:08 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:08.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:20:08 compute-1 nova_compute[235132]: 2025-10-10 10:20:08.656 2 DEBUG nova.network.neutron [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Updating instance_info_cache with network_info: [{"id": "d7538303-305d-4e01-9d26-cff58aec5656", "address": "fa:16:3e:1a:a6:6c", "network": {"id": "fb3e50c5-fe48-4113-87d7-4e11945ac752", "bridge": "br-int", "label": "tempest-network-smoke--916183171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7538303-30", "ovs_interfaceid": "d7538303-305d-4e01-9d26-cff58aec5656", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:20:08 compute-1 nova_compute[235132]: 2025-10-10 10:20:08.695 2 DEBUG oslo_concurrency.lockutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Releasing lock "refresh_cache-03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:20:08 compute-1 nova_compute[235132]: 2025-10-10 10:20:08.696 2 DEBUG nova.compute.manager [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Instance network_info: |[{"id": "d7538303-305d-4e01-9d26-cff58aec5656", "address": "fa:16:3e:1a:a6:6c", "network": {"id": "fb3e50c5-fe48-4113-87d7-4e11945ac752", "bridge": "br-int", "label": "tempest-network-smoke--916183171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7538303-30", "ovs_interfaceid": "d7538303-305d-4e01-9d26-cff58aec5656", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 10 10:20:08 compute-1 nova_compute[235132]: 2025-10-10 10:20:08.697 2 DEBUG oslo_concurrency.lockutils [req-46be1474-509d-45e8-8e45-d4cae6c9b1a9 req-686846bd-a091-43d2-8bd0-72298cbd2318 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquired lock "refresh_cache-03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:20:08 compute-1 nova_compute[235132]: 2025-10-10 10:20:08.697 2 DEBUG nova.network.neutron [req-46be1474-509d-45e8-8e45-d4cae6c9b1a9 req-686846bd-a091-43d2-8bd0-72298cbd2318 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Refreshing network info cache for port d7538303-305d-4e01-9d26-cff58aec5656 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 10 10:20:08 compute-1 nova_compute[235132]: 2025-10-10 10:20:08.700 2 DEBUG nova.virt.libvirt.driver [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Start _get_guest_xml network_info=[{"id": "d7538303-305d-4e01-9d26-cff58aec5656", "address": "fa:16:3e:1a:a6:6c", "network": {"id": "fb3e50c5-fe48-4113-87d7-4e11945ac752", "bridge": "br-int", "label": "tempest-network-smoke--916183171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7538303-30", "ovs_interfaceid": "d7538303-305d-4e01-9d26-cff58aec5656", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-10T10:09:50Z,direct_url=<?>,disk_format='qcow2',id=5ae78700-970d-45b4-a57d-978a054c7519,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ec962e275689437d80680ff3ea69c852',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-10T10:09:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'disk_bus': 'virtio', 'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'device_type': 'disk', 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'boot_index': 0, 'guest_format': None, 'image_id': '5ae78700-970d-45b4-a57d-978a054c7519'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 10 10:20:08 compute-1 nova_compute[235132]: 2025-10-10 10:20:08.707 2 WARNING nova.virt.libvirt.driver [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:20:08 compute-1 nova_compute[235132]: 2025-10-10 10:20:08.712 2 DEBUG nova.virt.libvirt.host [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 10 10:20:08 compute-1 nova_compute[235132]: 2025-10-10 10:20:08.713 2 DEBUG nova.virt.libvirt.host [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 10 10:20:08 compute-1 nova_compute[235132]: 2025-10-10 10:20:08.720 2 DEBUG nova.virt.libvirt.host [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Searching host: 'compute-1.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 10 10:20:08 compute-1 nova_compute[235132]: 2025-10-10 10:20:08.720 2 DEBUG nova.virt.libvirt.host [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 10 10:20:08 compute-1 nova_compute[235132]: 2025-10-10 10:20:08.721 2 DEBUG nova.virt.libvirt.driver [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 10 10:20:08 compute-1 nova_compute[235132]: 2025-10-10 10:20:08.721 2 DEBUG nova.virt.hardware [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-10T10:09:48Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='00373e71-6208-4238-ad85-db0452c53bc6',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-10T10:09:50Z,direct_url=<?>,disk_format='qcow2',id=5ae78700-970d-45b4-a57d-978a054c7519,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ec962e275689437d80680ff3ea69c852',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-10T10:09:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 10 10:20:08 compute-1 nova_compute[235132]: 2025-10-10 10:20:08.722 2 DEBUG nova.virt.hardware [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 10 10:20:08 compute-1 nova_compute[235132]: 2025-10-10 10:20:08.722 2 DEBUG nova.virt.hardware [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 10 10:20:08 compute-1 nova_compute[235132]: 2025-10-10 10:20:08.722 2 DEBUG nova.virt.hardware [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 10 10:20:08 compute-1 nova_compute[235132]: 2025-10-10 10:20:08.723 2 DEBUG nova.virt.hardware [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 10 10:20:08 compute-1 nova_compute[235132]: 2025-10-10 10:20:08.723 2 DEBUG nova.virt.hardware [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 10 10:20:08 compute-1 nova_compute[235132]: 2025-10-10 10:20:08.723 2 DEBUG nova.virt.hardware [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 10 10:20:08 compute-1 nova_compute[235132]: 2025-10-10 10:20:08.725 2 DEBUG nova.virt.hardware [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 10 10:20:08 compute-1 nova_compute[235132]: 2025-10-10 10:20:08.725 2 DEBUG nova.virt.hardware [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 10 10:20:08 compute-1 nova_compute[235132]: 2025-10-10 10:20:08.726 2 DEBUG nova.virt.hardware [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 10 10:20:08 compute-1 nova_compute[235132]: 2025-10-10 10:20:08.726 2 DEBUG nova.virt.hardware [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 10 10:20:08 compute-1 nova_compute[235132]: 2025-10-10 10:20:08.730 2 DEBUG oslo_concurrency.processutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:20:09 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:09 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:20:09 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:09.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:20:09 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 10 10:20:09 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1482921149' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:20:09 compute-1 nova_compute[235132]: 2025-10-10 10:20:09.236 2 DEBUG oslo_concurrency.processutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:20:09 compute-1 nova_compute[235132]: 2025-10-10 10:20:09.278 2 DEBUG nova.storage.rbd_utils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:20:09 compute-1 nova_compute[235132]: 2025-10-10 10:20:09.283 2 DEBUG oslo_concurrency.processutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:20:09 compute-1 nova_compute[235132]: 2025-10-10 10:20:09.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:09 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 10 10:20:09 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1189993806' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:20:09 compute-1 nova_compute[235132]: 2025-10-10 10:20:09.753 2 DEBUG oslo_concurrency.processutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:20:09 compute-1 nova_compute[235132]: 2025-10-10 10:20:09.755 2 DEBUG nova.virt.libvirt.vif [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-10T10:20:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1292020392',display_name='tempest-TestNetworkBasicOps-server-1292020392',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1292020392',id=12,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAT+YikYTBA+gJ7uw6swAHe8UXJlrkdRsMPU0KwiyyFauWaLZUlwpDJtpNi3JcVUbWLYjO0HRPwQgIDxYUsNqQN2uz9WldWafuvChAH95C9TEkm8Ni1fVouqScJtHFj6Ww==',key_name='tempest-TestNetworkBasicOps-218277133',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-3xmgv6ba',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-10T10:20:05Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d7538303-305d-4e01-9d26-cff58aec5656", "address": "fa:16:3e:1a:a6:6c", "network": {"id": "fb3e50c5-fe48-4113-87d7-4e11945ac752", "bridge": "br-int", "label": "tempest-network-smoke--916183171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7538303-30", "ovs_interfaceid": "d7538303-305d-4e01-9d26-cff58aec5656", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 10 10:20:09 compute-1 nova_compute[235132]: 2025-10-10 10:20:09.755 2 DEBUG nova.network.os_vif_util [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "d7538303-305d-4e01-9d26-cff58aec5656", "address": "fa:16:3e:1a:a6:6c", "network": {"id": "fb3e50c5-fe48-4113-87d7-4e11945ac752", "bridge": "br-int", "label": "tempest-network-smoke--916183171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7538303-30", "ovs_interfaceid": "d7538303-305d-4e01-9d26-cff58aec5656", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 10 10:20:09 compute-1 nova_compute[235132]: 2025-10-10 10:20:09.756 2 DEBUG nova.network.os_vif_util [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:a6:6c,bridge_name='br-int',has_traffic_filtering=True,id=d7538303-305d-4e01-9d26-cff58aec5656,network=Network(fb3e50c5-fe48-4113-87d7-4e11945ac752),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7538303-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 10 10:20:09 compute-1 nova_compute[235132]: 2025-10-10 10:20:09.758 2 DEBUG nova.objects.instance [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 10:20:09 compute-1 nova_compute[235132]: 2025-10-10 10:20:09.780 2 DEBUG nova.virt.libvirt.driver [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] End _get_guest_xml xml=<domain type="kvm">
Oct 10 10:20:09 compute-1 nova_compute[235132]:   <uuid>03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3</uuid>
Oct 10 10:20:09 compute-1 nova_compute[235132]:   <name>instance-0000000c</name>
Oct 10 10:20:09 compute-1 nova_compute[235132]:   <memory>131072</memory>
Oct 10 10:20:09 compute-1 nova_compute[235132]:   <vcpu>1</vcpu>
Oct 10 10:20:09 compute-1 nova_compute[235132]:   <metadata>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 10 10:20:09 compute-1 nova_compute[235132]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:       <nova:name>tempest-TestNetworkBasicOps-server-1292020392</nova:name>
Oct 10 10:20:09 compute-1 nova_compute[235132]:       <nova:creationTime>2025-10-10 10:20:08</nova:creationTime>
Oct 10 10:20:09 compute-1 nova_compute[235132]:       <nova:flavor name="m1.nano">
Oct 10 10:20:09 compute-1 nova_compute[235132]:         <nova:memory>128</nova:memory>
Oct 10 10:20:09 compute-1 nova_compute[235132]:         <nova:disk>1</nova:disk>
Oct 10 10:20:09 compute-1 nova_compute[235132]:         <nova:swap>0</nova:swap>
Oct 10 10:20:09 compute-1 nova_compute[235132]:         <nova:ephemeral>0</nova:ephemeral>
Oct 10 10:20:09 compute-1 nova_compute[235132]:         <nova:vcpus>1</nova:vcpus>
Oct 10 10:20:09 compute-1 nova_compute[235132]:       </nova:flavor>
Oct 10 10:20:09 compute-1 nova_compute[235132]:       <nova:owner>
Oct 10 10:20:09 compute-1 nova_compute[235132]:         <nova:user uuid="7956778c03764aaf8906c9b435337976">tempest-TestNetworkBasicOps-188749107-project-member</nova:user>
Oct 10 10:20:09 compute-1 nova_compute[235132]:         <nova:project uuid="d5e531d4b440422d946eaf6fd4e166f7">tempest-TestNetworkBasicOps-188749107</nova:project>
Oct 10 10:20:09 compute-1 nova_compute[235132]:       </nova:owner>
Oct 10 10:20:09 compute-1 nova_compute[235132]:       <nova:root type="image" uuid="5ae78700-970d-45b4-a57d-978a054c7519"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:       <nova:ports>
Oct 10 10:20:09 compute-1 nova_compute[235132]:         <nova:port uuid="d7538303-305d-4e01-9d26-cff58aec5656">
Oct 10 10:20:09 compute-1 nova_compute[235132]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:         </nova:port>
Oct 10 10:20:09 compute-1 nova_compute[235132]:       </nova:ports>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     </nova:instance>
Oct 10 10:20:09 compute-1 nova_compute[235132]:   </metadata>
Oct 10 10:20:09 compute-1 nova_compute[235132]:   <sysinfo type="smbios">
Oct 10 10:20:09 compute-1 nova_compute[235132]:     <system>
Oct 10 10:20:09 compute-1 nova_compute[235132]:       <entry name="manufacturer">RDO</entry>
Oct 10 10:20:09 compute-1 nova_compute[235132]:       <entry name="product">OpenStack Compute</entry>
Oct 10 10:20:09 compute-1 nova_compute[235132]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 10 10:20:09 compute-1 nova_compute[235132]:       <entry name="serial">03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3</entry>
Oct 10 10:20:09 compute-1 nova_compute[235132]:       <entry name="uuid">03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3</entry>
Oct 10 10:20:09 compute-1 nova_compute[235132]:       <entry name="family">Virtual Machine</entry>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     </system>
Oct 10 10:20:09 compute-1 nova_compute[235132]:   </sysinfo>
Oct 10 10:20:09 compute-1 nova_compute[235132]:   <os>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     <boot dev="hd"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     <smbios mode="sysinfo"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:   </os>
Oct 10 10:20:09 compute-1 nova_compute[235132]:   <features>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     <acpi/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     <apic/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     <vmcoreinfo/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:   </features>
Oct 10 10:20:09 compute-1 nova_compute[235132]:   <clock offset="utc">
Oct 10 10:20:09 compute-1 nova_compute[235132]:     <timer name="pit" tickpolicy="delay"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     <timer name="hpet" present="no"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:   </clock>
Oct 10 10:20:09 compute-1 nova_compute[235132]:   <cpu mode="host-model" match="exact">
Oct 10 10:20:09 compute-1 nova_compute[235132]:     <topology sockets="1" cores="1" threads="1"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:   </cpu>
Oct 10 10:20:09 compute-1 nova_compute[235132]:   <devices>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     <disk type="network" device="disk">
Oct 10 10:20:09 compute-1 nova_compute[235132]:       <driver type="raw" cache="none"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:       <source protocol="rbd" name="vms/03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3_disk">
Oct 10 10:20:09 compute-1 nova_compute[235132]:         <host name="192.168.122.100" port="6789"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:         <host name="192.168.122.102" port="6789"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:         <host name="192.168.122.101" port="6789"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:       </source>
Oct 10 10:20:09 compute-1 nova_compute[235132]:       <auth username="openstack">
Oct 10 10:20:09 compute-1 nova_compute[235132]:         <secret type="ceph" uuid="21f084a3-af34-5230-afe4-ea5cd24a55f4"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:       </auth>
Oct 10 10:20:09 compute-1 nova_compute[235132]:       <target dev="vda" bus="virtio"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     </disk>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     <disk type="network" device="cdrom">
Oct 10 10:20:09 compute-1 nova_compute[235132]:       <driver type="raw" cache="none"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:       <source protocol="rbd" name="vms/03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3_disk.config">
Oct 10 10:20:09 compute-1 nova_compute[235132]:         <host name="192.168.122.100" port="6789"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:         <host name="192.168.122.102" port="6789"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:         <host name="192.168.122.101" port="6789"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:       </source>
Oct 10 10:20:09 compute-1 nova_compute[235132]:       <auth username="openstack">
Oct 10 10:20:09 compute-1 nova_compute[235132]:         <secret type="ceph" uuid="21f084a3-af34-5230-afe4-ea5cd24a55f4"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:       </auth>
Oct 10 10:20:09 compute-1 nova_compute[235132]:       <target dev="sda" bus="sata"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     </disk>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     <interface type="ethernet">
Oct 10 10:20:09 compute-1 nova_compute[235132]:       <mac address="fa:16:3e:1a:a6:6c"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:       <model type="virtio"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:       <driver name="vhost" rx_queue_size="512"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:       <mtu size="1442"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:       <target dev="tapd7538303-30"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     </interface>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     <serial type="pty">
Oct 10 10:20:09 compute-1 nova_compute[235132]:       <log file="/var/lib/nova/instances/03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3/console.log" append="off"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     </serial>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     <video>
Oct 10 10:20:09 compute-1 nova_compute[235132]:       <model type="virtio"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     </video>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     <input type="tablet" bus="usb"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     <rng model="virtio">
Oct 10 10:20:09 compute-1 nova_compute[235132]:       <backend model="random">/dev/urandom</backend>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     </rng>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     <controller type="pci" model="pcie-root-port"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     <controller type="usb" index="0"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     <memballoon model="virtio">
Oct 10 10:20:09 compute-1 nova_compute[235132]:       <stats period="10"/>
Oct 10 10:20:09 compute-1 nova_compute[235132]:     </memballoon>
Oct 10 10:20:09 compute-1 nova_compute[235132]:   </devices>
Oct 10 10:20:09 compute-1 nova_compute[235132]: </domain>
Oct 10 10:20:09 compute-1 nova_compute[235132]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 10 10:20:09 compute-1 nova_compute[235132]: 2025-10-10 10:20:09.782 2 DEBUG nova.compute.manager [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Preparing to wait for external event network-vif-plugged-d7538303-305d-4e01-9d26-cff58aec5656 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 10 10:20:09 compute-1 nova_compute[235132]: 2025-10-10 10:20:09.782 2 DEBUG oslo_concurrency.lockutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:20:09 compute-1 nova_compute[235132]: 2025-10-10 10:20:09.783 2 DEBUG oslo_concurrency.lockutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:20:09 compute-1 nova_compute[235132]: 2025-10-10 10:20:09.783 2 DEBUG oslo_concurrency.lockutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:20:09 compute-1 nova_compute[235132]: 2025-10-10 10:20:09.785 2 DEBUG nova.virt.libvirt.vif [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-10T10:20:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1292020392',display_name='tempest-TestNetworkBasicOps-server-1292020392',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1292020392',id=12,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAT+YikYTBA+gJ7uw6swAHe8UXJlrkdRsMPU0KwiyyFauWaLZUlwpDJtpNi3JcVUbWLYjO0HRPwQgIDxYUsNqQN2uz9WldWafuvChAH95C9TEkm8Ni1fVouqScJtHFj6Ww==',key_name='tempest-TestNetworkBasicOps-218277133',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-3xmgv6ba',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-10T10:20:05Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d7538303-305d-4e01-9d26-cff58aec5656", "address": "fa:16:3e:1a:a6:6c", "network": {"id": "fb3e50c5-fe48-4113-87d7-4e11945ac752", "bridge": "br-int", "label": "tempest-network-smoke--916183171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7538303-30", "ovs_interfaceid": "d7538303-305d-4e01-9d26-cff58aec5656", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 10 10:20:09 compute-1 nova_compute[235132]: 2025-10-10 10:20:09.785 2 DEBUG nova.network.os_vif_util [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "d7538303-305d-4e01-9d26-cff58aec5656", "address": "fa:16:3e:1a:a6:6c", "network": {"id": "fb3e50c5-fe48-4113-87d7-4e11945ac752", "bridge": "br-int", "label": "tempest-network-smoke--916183171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7538303-30", "ovs_interfaceid": "d7538303-305d-4e01-9d26-cff58aec5656", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 10 10:20:09 compute-1 nova_compute[235132]: 2025-10-10 10:20:09.786 2 DEBUG nova.network.os_vif_util [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:a6:6c,bridge_name='br-int',has_traffic_filtering=True,id=d7538303-305d-4e01-9d26-cff58aec5656,network=Network(fb3e50c5-fe48-4113-87d7-4e11945ac752),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7538303-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 10 10:20:09 compute-1 nova_compute[235132]: 2025-10-10 10:20:09.787 2 DEBUG os_vif [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:a6:6c,bridge_name='br-int',has_traffic_filtering=True,id=d7538303-305d-4e01-9d26-cff58aec5656,network=Network(fb3e50c5-fe48-4113-87d7-4e11945ac752),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7538303-30') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 10 10:20:09 compute-1 nova_compute[235132]: 2025-10-10 10:20:09.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:09 compute-1 nova_compute[235132]: 2025-10-10 10:20:09.794 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:20:09 compute-1 nova_compute[235132]: 2025-10-10 10:20:09.795 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 10 10:20:09 compute-1 nova_compute[235132]: 2025-10-10 10:20:09.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:09 compute-1 nova_compute[235132]: 2025-10-10 10:20:09.800 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7538303-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:20:09 compute-1 nova_compute[235132]: 2025-10-10 10:20:09.801 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd7538303-30, col_values=(('external_ids', {'iface-id': 'd7538303-305d-4e01-9d26-cff58aec5656', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1a:a6:6c', 'vm-uuid': '03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:20:09 compute-1 nova_compute[235132]: 2025-10-10 10:20:09.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:09 compute-1 NetworkManager[44982]: <info>  [1760091609.8058] manager: (tapd7538303-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Oct 10 10:20:09 compute-1 nova_compute[235132]: 2025-10-10 10:20:09.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 10 10:20:09 compute-1 nova_compute[235132]: 2025-10-10 10:20:09.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:09 compute-1 nova_compute[235132]: 2025-10-10 10:20:09.818 2 INFO os_vif [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:a6:6c,bridge_name='br-int',has_traffic_filtering=True,id=d7538303-305d-4e01-9d26-cff58aec5656,network=Network(fb3e50c5-fe48-4113-87d7-4e11945ac752),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7538303-30')
Oct 10 10:20:09 compute-1 nova_compute[235132]: 2025-10-10 10:20:09.885 2 DEBUG nova.virt.libvirt.driver [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 10 10:20:09 compute-1 nova_compute[235132]: 2025-10-10 10:20:09.886 2 DEBUG nova.virt.libvirt.driver [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 10 10:20:09 compute-1 nova_compute[235132]: 2025-10-10 10:20:09.886 2 DEBUG nova.virt.libvirt.driver [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] No VIF found with MAC fa:16:3e:1a:a6:6c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 10 10:20:09 compute-1 nova_compute[235132]: 2025-10-10 10:20:09.887 2 INFO nova.virt.libvirt.driver [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Using config drive
Oct 10 10:20:09 compute-1 nova_compute[235132]: 2025-10-10 10:20:09.925 2 DEBUG nova.storage.rbd_utils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:20:10 compute-1 ceph-mon[79167]: pgmap v1026: 353 pgs: 353 active+clean; 167 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Oct 10 10:20:10 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/1482921149' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:20:10 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/1189993806' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:20:10 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:10 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:10 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:10.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:11 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:11 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:20:11 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:11.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:20:11 compute-1 nova_compute[235132]: 2025-10-10 10:20:11.064 2 DEBUG nova.network.neutron [req-46be1474-509d-45e8-8e45-d4cae6c9b1a9 req-686846bd-a091-43d2-8bd0-72298cbd2318 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Updated VIF entry in instance network info cache for port d7538303-305d-4e01-9d26-cff58aec5656. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 10 10:20:11 compute-1 nova_compute[235132]: 2025-10-10 10:20:11.065 2 DEBUG nova.network.neutron [req-46be1474-509d-45e8-8e45-d4cae6c9b1a9 req-686846bd-a091-43d2-8bd0-72298cbd2318 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Updating instance_info_cache with network_info: [{"id": "d7538303-305d-4e01-9d26-cff58aec5656", "address": "fa:16:3e:1a:a6:6c", "network": {"id": "fb3e50c5-fe48-4113-87d7-4e11945ac752", "bridge": "br-int", "label": "tempest-network-smoke--916183171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7538303-30", "ovs_interfaceid": "d7538303-305d-4e01-9d26-cff58aec5656", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:20:11 compute-1 nova_compute[235132]: 2025-10-10 10:20:11.089 2 DEBUG oslo_concurrency.lockutils [req-46be1474-509d-45e8-8e45-d4cae6c9b1a9 req-686846bd-a091-43d2-8bd0-72298cbd2318 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Releasing lock "refresh_cache-03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:20:11 compute-1 nova_compute[235132]: 2025-10-10 10:20:11.978 2 INFO nova.virt.libvirt.driver [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Creating config drive at /var/lib/nova/instances/03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3/disk.config
Oct 10 10:20:11 compute-1 nova_compute[235132]: 2025-10-10 10:20:11.986 2 DEBUG oslo_concurrency.processutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc0j9c2po execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:20:12 compute-1 nova_compute[235132]: 2025-10-10 10:20:12.132 2 DEBUG oslo_concurrency.processutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc0j9c2po" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:20:12 compute-1 nova_compute[235132]: 2025-10-10 10:20:12.183 2 DEBUG nova.storage.rbd_utils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] rbd image 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 10 10:20:12 compute-1 nova_compute[235132]: 2025-10-10 10:20:12.188 2 DEBUG oslo_concurrency.processutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3/disk.config 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:20:12 compute-1 ceph-mon[79167]: pgmap v1027: 353 pgs: 353 active+clean; 167 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 347 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Oct 10 10:20:12 compute-1 sudo[247665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:20:12 compute-1 sudo[247665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:20:12 compute-1 sudo[247665]: pam_unix(sudo:session): session closed for user root
Oct 10 10:20:12 compute-1 sudo[247709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:20:12 compute-1 sudo[247709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:20:12 compute-1 nova_compute[235132]: 2025-10-10 10:20:12.359 2 DEBUG oslo_concurrency.processutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3/disk.config 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:20:12 compute-1 nova_compute[235132]: 2025-10-10 10:20:12.361 2 INFO nova.virt.libvirt.driver [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Deleting local config drive /var/lib/nova/instances/03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3/disk.config because it was imported into RBD.
Oct 10 10:20:12 compute-1 systemd[1]: Starting libvirt secret daemon...
Oct 10 10:20:12 compute-1 systemd[1]: Started libvirt secret daemon.
Oct 10 10:20:12 compute-1 kernel: tapd7538303-30: entered promiscuous mode
Oct 10 10:20:12 compute-1 NetworkManager[44982]: <info>  [1760091612.4950] manager: (tapd7538303-30): new Tun device (/org/freedesktop/NetworkManager/Devices/61)
Oct 10 10:20:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:20:12 compute-1 nova_compute[235132]: 2025-10-10 10:20:12.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:12 compute-1 ovn_controller[131749]: 2025-10-10T10:20:12Z|00088|binding|INFO|Claiming lport d7538303-305d-4e01-9d26-cff58aec5656 for this chassis.
Oct 10 10:20:12 compute-1 ovn_controller[131749]: 2025-10-10T10:20:12Z|00089|binding|INFO|d7538303-305d-4e01-9d26-cff58aec5656: Claiming fa:16:3e:1a:a6:6c 10.100.0.12
Oct 10 10:20:12 compute-1 nova_compute[235132]: 2025-10-10 10:20:12.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:12 compute-1 nova_compute[235132]: 2025-10-10 10:20:12.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:12 compute-1 nova_compute[235132]: 2025-10-10 10:20:12.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:12 compute-1 nova_compute[235132]: 2025-10-10 10:20:12.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:12 compute-1 NetworkManager[44982]: <info>  [1760091612.5188] manager: (patch-provnet-1d90fa58-74cb-4ad4-84e0-739689a69111-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Oct 10 10:20:12 compute-1 NetworkManager[44982]: <info>  [1760091612.5201] manager: (patch-br-int-to-provnet-1d90fa58-74cb-4ad4-84e0-739689a69111): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.525 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:a6:6c 10.100.0.12'], port_security=['fa:16:3e:1a:a6:6c 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fb3e50c5-fe48-4113-87d7-4e11945ac752', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b1cafd03-311e-4cea-ac47-0377bdc1af9d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bc36a9e4-a12c-4b9d-8968-49f72bde3476, chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>], logical_port=d7538303-305d-4e01-9d26-cff58aec5656) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.527 141156 INFO neutron.agent.ovn.metadata.agent [-] Port d7538303-305d-4e01-9d26-cff58aec5656 in datapath fb3e50c5-fe48-4113-87d7-4e11945ac752 bound to our chassis
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.528 141156 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fb3e50c5-fe48-4113-87d7-4e11945ac752
Oct 10 10:20:12 compute-1 systemd-machined[191637]: New machine qemu-5-instance-0000000c.
Oct 10 10:20:12 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:12 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:12 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:12.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.547 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[57aac461-35a0-4c8d-9727-6a3ec2540824]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.549 141156 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfb3e50c5-f1 in ovnmeta-fb3e50c5-fe48-4113-87d7-4e11945ac752 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.552 238898 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfb3e50c5-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.553 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[578ab604-4bed-458e-bfd0-037968e669a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.554 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[58740491-ec9c-4557-829c-213322629d15]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.569 141275 DEBUG oslo.privsep.daemon [-] privsep: reply[41cebd4c-ef06-4878-974e-47d46f9dafc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:20:12 compute-1 systemd[1]: Started Virtual Machine qemu-5-instance-0000000c.
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.593 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[208e6a63-48fa-4952-8290-4d26f2995491]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:20:12 compute-1 nova_compute[235132]: 2025-10-10 10:20:12.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:12 compute-1 nova_compute[235132]: 2025-10-10 10:20:12.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:12 compute-1 ovn_controller[131749]: 2025-10-10T10:20:12Z|00090|binding|INFO|Setting lport d7538303-305d-4e01-9d26-cff58aec5656 ovn-installed in OVS
Oct 10 10:20:12 compute-1 ovn_controller[131749]: 2025-10-10T10:20:12Z|00091|binding|INFO|Setting lport d7538303-305d-4e01-9d26-cff58aec5656 up in Southbound
Oct 10 10:20:12 compute-1 systemd-udevd[247800]: Network interface NamePolicy= disabled on kernel command line.
Oct 10 10:20:12 compute-1 nova_compute[235132]: 2025-10-10 10:20:12.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:12 compute-1 NetworkManager[44982]: <info>  [1760091612.6326] device (tapd7538303-30): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 10 10:20:12 compute-1 NetworkManager[44982]: <info>  [1760091612.6335] device (tapd7538303-30): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.646 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[9371bb70-d364-4690-a2b3-2c47b59cc8ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:20:12 compute-1 NetworkManager[44982]: <info>  [1760091612.6566] manager: (tapfb3e50c5-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/64)
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.655 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[4917589c-28e7-4cf8-a5d0-41d08b52b84e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.692 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[e47b3dcd-a0e5-4135-9e77-4113a3b2ce84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.696 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[a1318f62-2400-44f1-a651-ab02dedfd5fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:20:12 compute-1 NetworkManager[44982]: <info>  [1760091612.7219] device (tapfb3e50c5-f0): carrier: link connected
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.731 238913 DEBUG oslo.privsep.daemon [-] privsep: reply[41d567ed-a799-4095-a866-de0b98bd0287]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.760 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[f8ae8212-e336-4d1f-9153-80035dde9613]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfb3e50c5-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:c3:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452643, 'reachable_time': 41197, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 247832, 'error': None, 'target': 'ovnmeta-fb3e50c5-fe48-4113-87d7-4e11945ac752', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.773 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[e9cca277-1098-41c6-8086-cef2e2689343]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe36:c3b9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452643, 'tstamp': 452643}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 247834, 'error': None, 'target': 'ovnmeta-fb3e50c5-fe48-4113-87d7-4e11945ac752', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.788 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[a8f8d056-774c-47bd-8e14-13af366a67e0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfb3e50c5-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:c3:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452643, 'reachable_time': 41197, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 247835, 'error': None, 'target': 'ovnmeta-fb3e50c5-fe48-4113-87d7-4e11945ac752', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.820 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[c0874b5a-4e35-4e62-aabd-ab86bbfff924]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.893 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[6cab77e1-f00a-4d89-aec2-cdcb69472837]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.895 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfb3e50c5-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:20:12 compute-1 sudo[247709]: pam_unix(sudo:session): session closed for user root
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.895 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.896 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfb3e50c5-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:20:12 compute-1 nova_compute[235132]: 2025-10-10 10:20:12.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:12 compute-1 kernel: tapfb3e50c5-f0: entered promiscuous mode
Oct 10 10:20:12 compute-1 NetworkManager[44982]: <info>  [1760091612.8984] manager: (tapfb3e50c5-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Oct 10 10:20:12 compute-1 nova_compute[235132]: 2025-10-10 10:20:12.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.902 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfb3e50c5-f0, col_values=(('external_ids', {'iface-id': '50744b55-fb9e-4bc1-a3e6-4ad27846c672'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:20:12 compute-1 nova_compute[235132]: 2025-10-10 10:20:12.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:12 compute-1 ovn_controller[131749]: 2025-10-10T10:20:12Z|00092|binding|INFO|Releasing lport 50744b55-fb9e-4bc1-a3e6-4ad27846c672 from this chassis (sb_readonly=0)
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.908 141156 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fb3e50c5-fe48-4113-87d7-4e11945ac752.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fb3e50c5-fe48-4113-87d7-4e11945ac752.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.909 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[e01361eb-6002-4bc2-ab8e-012405f7134a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.910 141156 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]: global
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]:     log         /dev/log local0 debug
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]:     log-tag     haproxy-metadata-proxy-fb3e50c5-fe48-4113-87d7-4e11945ac752
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]:     user        root
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]:     group       root
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]:     maxconn     1024
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]:     pidfile     /var/lib/neutron/external/pids/fb3e50c5-fe48-4113-87d7-4e11945ac752.pid.haproxy
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]:     daemon
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]: 
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]: defaults
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]:     log global
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]:     mode http
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]:     option httplog
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]:     option dontlognull
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]:     option http-server-close
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]:     option forwardfor
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]:     retries                 3
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]:     timeout http-request    30s
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]:     timeout connect         30s
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]:     timeout client          32s
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]:     timeout server          32s
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]:     timeout http-keep-alive 30s
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]: 
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]: 
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]: listen listener
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]:     bind 169.254.169.254:80
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]:     server metadata /var/lib/neutron/metadata_proxy
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]:     http-request add-header X-OVN-Network-ID fb3e50c5-fe48-4113-87d7-4e11945ac752
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 10 10:20:12 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:20:12.911 141156 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fb3e50c5-fe48-4113-87d7-4e11945ac752', 'env', 'PROCESS_TAG=haproxy-fb3e50c5-fe48-4113-87d7-4e11945ac752', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fb3e50c5-fe48-4113-87d7-4e11945ac752.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 10 10:20:12 compute-1 nova_compute[235132]: 2025-10-10 10:20:12.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:20:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:20:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:20:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:13 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:20:13 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:13 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:13 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:13.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:13 compute-1 nova_compute[235132]: 2025-10-10 10:20:13.168 2 DEBUG nova.compute.manager [req-3ddc586e-21ed-4c54-8f59-0c4d957fbb40 req-1369880f-5f46-49e1-9282-d3782d097f19 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Received event network-vif-plugged-d7538303-305d-4e01-9d26-cff58aec5656 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:20:13 compute-1 nova_compute[235132]: 2025-10-10 10:20:13.169 2 DEBUG oslo_concurrency.lockutils [req-3ddc586e-21ed-4c54-8f59-0c4d957fbb40 req-1369880f-5f46-49e1-9282-d3782d097f19 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:20:13 compute-1 nova_compute[235132]: 2025-10-10 10:20:13.169 2 DEBUG oslo_concurrency.lockutils [req-3ddc586e-21ed-4c54-8f59-0c4d957fbb40 req-1369880f-5f46-49e1-9282-d3782d097f19 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:20:13 compute-1 nova_compute[235132]: 2025-10-10 10:20:13.169 2 DEBUG oslo_concurrency.lockutils [req-3ddc586e-21ed-4c54-8f59-0c4d957fbb40 req-1369880f-5f46-49e1-9282-d3782d097f19 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:20:13 compute-1 nova_compute[235132]: 2025-10-10 10:20:13.170 2 DEBUG nova.compute.manager [req-3ddc586e-21ed-4c54-8f59-0c4d957fbb40 req-1369880f-5f46-49e1-9282-d3782d097f19 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Processing event network-vif-plugged-d7538303-305d-4e01-9d26-cff58aec5656 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 10 10:20:13 compute-1 podman[247921]: 2025-10-10 10:20:13.290934007 +0000 UTC m=+0.059018435 container create c7dade801056fde8fa048a3ff53307ad38c19b768bcc569abfafc91dc556a6d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fb3e50c5-fe48-4113-87d7-4e11945ac752, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 10 10:20:13 compute-1 nova_compute[235132]: 2025-10-10 10:20:13.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:13 compute-1 systemd[1]: Started libpod-conmon-c7dade801056fde8fa048a3ff53307ad38c19b768bcc569abfafc91dc556a6d6.scope.
Oct 10 10:20:13 compute-1 podman[247921]: 2025-10-10 10:20:13.267679271 +0000 UTC m=+0.035763729 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 10 10:20:13 compute-1 systemd[1]: Started libcrun container.
Oct 10 10:20:13 compute-1 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6f376d2e93cbfc6e1e776215b96aff2cab5e98ad5b3c1c6a6ebb79786c35344/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 10 10:20:13 compute-1 podman[247921]: 2025-10-10 10:20:13.384065893 +0000 UTC m=+0.152150321 container init c7dade801056fde8fa048a3ff53307ad38c19b768bcc569abfafc91dc556a6d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fb3e50c5-fe48-4113-87d7-4e11945ac752, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 10 10:20:13 compute-1 podman[247921]: 2025-10-10 10:20:13.389105721 +0000 UTC m=+0.157190149 container start c7dade801056fde8fa048a3ff53307ad38c19b768bcc569abfafc91dc556a6d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fb3e50c5-fe48-4113-87d7-4e11945ac752, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:20:13 compute-1 neutron-haproxy-ovnmeta-fb3e50c5-fe48-4113-87d7-4e11945ac752[247936]: [NOTICE]   (247941) : New worker (247943) forked
Oct 10 10:20:13 compute-1 neutron-haproxy-ovnmeta-fb3e50c5-fe48-4113-87d7-4e11945ac752[247936]: [NOTICE]   (247941) : Loading success.
Oct 10 10:20:13 compute-1 nova_compute[235132]: 2025-10-10 10:20:13.487 2 DEBUG nova.compute.manager [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 10 10:20:13 compute-1 nova_compute[235132]: 2025-10-10 10:20:13.487 2 DEBUG nova.virt.driver [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Emitting event <LifecycleEvent: 1760091613.4864576, 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 10:20:13 compute-1 nova_compute[235132]: 2025-10-10 10:20:13.488 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] VM Started (Lifecycle Event)
Oct 10 10:20:13 compute-1 nova_compute[235132]: 2025-10-10 10:20:13.490 2 DEBUG nova.virt.libvirt.driver [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 10 10:20:13 compute-1 nova_compute[235132]: 2025-10-10 10:20:13.494 2 INFO nova.virt.libvirt.driver [-] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Instance spawned successfully.
Oct 10 10:20:13 compute-1 nova_compute[235132]: 2025-10-10 10:20:13.494 2 DEBUG nova.virt.libvirt.driver [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 10 10:20:13 compute-1 nova_compute[235132]: 2025-10-10 10:20:13.518 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:20:13 compute-1 nova_compute[235132]: 2025-10-10 10:20:13.524 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 10 10:20:13 compute-1 nova_compute[235132]: 2025-10-10 10:20:13.529 2 DEBUG nova.virt.libvirt.driver [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:20:13 compute-1 nova_compute[235132]: 2025-10-10 10:20:13.529 2 DEBUG nova.virt.libvirt.driver [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:20:13 compute-1 nova_compute[235132]: 2025-10-10 10:20:13.530 2 DEBUG nova.virt.libvirt.driver [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:20:13 compute-1 nova_compute[235132]: 2025-10-10 10:20:13.530 2 DEBUG nova.virt.libvirt.driver [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:20:13 compute-1 nova_compute[235132]: 2025-10-10 10:20:13.531 2 DEBUG nova.virt.libvirt.driver [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:20:13 compute-1 nova_compute[235132]: 2025-10-10 10:20:13.531 2 DEBUG nova.virt.libvirt.driver [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 10 10:20:13 compute-1 nova_compute[235132]: 2025-10-10 10:20:13.568 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 10 10:20:13 compute-1 nova_compute[235132]: 2025-10-10 10:20:13.568 2 DEBUG nova.virt.driver [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Emitting event <LifecycleEvent: 1760091613.4865792, 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 10:20:13 compute-1 nova_compute[235132]: 2025-10-10 10:20:13.568 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] VM Paused (Lifecycle Event)
Oct 10 10:20:13 compute-1 nova_compute[235132]: 2025-10-10 10:20:13.605 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:20:13 compute-1 nova_compute[235132]: 2025-10-10 10:20:13.609 2 DEBUG nova.virt.driver [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] Emitting event <LifecycleEvent: 1760091613.4902136, 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 10:20:13 compute-1 nova_compute[235132]: 2025-10-10 10:20:13.610 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] VM Resumed (Lifecycle Event)
Oct 10 10:20:13 compute-1 nova_compute[235132]: 2025-10-10 10:20:13.630 2 INFO nova.compute.manager [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Took 7.79 seconds to spawn the instance on the hypervisor.
Oct 10 10:20:13 compute-1 nova_compute[235132]: 2025-10-10 10:20:13.630 2 DEBUG nova.compute.manager [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:20:13 compute-1 nova_compute[235132]: 2025-10-10 10:20:13.641 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:20:13 compute-1 nova_compute[235132]: 2025-10-10 10:20:13.644 2 DEBUG nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 10 10:20:13 compute-1 nova_compute[235132]: 2025-10-10 10:20:13.681 2 INFO nova.compute.manager [None req-313ea2d8-1bcb-4222-8843-270537386bbb - - - - - -] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 10 10:20:13 compute-1 nova_compute[235132]: 2025-10-10 10:20:13.716 2 INFO nova.compute.manager [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Took 8.78 seconds to build instance.
Oct 10 10:20:13 compute-1 nova_compute[235132]: 2025-10-10 10:20:13.740 2 DEBUG oslo_concurrency.lockutils [None req-3d7ba5da-5859-4eaf-97c5-f07837c41744 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.907s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:20:14 compute-1 ceph-mon[79167]: pgmap v1028: 353 pgs: 353 active+clean; 167 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 348 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Oct 10 10:20:14 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:14 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:20:14 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:14.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:20:14 compute-1 nova_compute[235132]: 2025-10-10 10:20:14.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:15 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:15 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:20:15 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:15.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:20:15 compute-1 nova_compute[235132]: 2025-10-10 10:20:15.265 2 DEBUG nova.compute.manager [req-d7dcb1bc-21ba-4259-9e50-01788e818201 req-bd983d3e-eacd-47b5-8b74-0b279437cf08 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Received event network-vif-plugged-d7538303-305d-4e01-9d26-cff58aec5656 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:20:15 compute-1 nova_compute[235132]: 2025-10-10 10:20:15.265 2 DEBUG oslo_concurrency.lockutils [req-d7dcb1bc-21ba-4259-9e50-01788e818201 req-bd983d3e-eacd-47b5-8b74-0b279437cf08 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:20:15 compute-1 nova_compute[235132]: 2025-10-10 10:20:15.266 2 DEBUG oslo_concurrency.lockutils [req-d7dcb1bc-21ba-4259-9e50-01788e818201 req-bd983d3e-eacd-47b5-8b74-0b279437cf08 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:20:15 compute-1 nova_compute[235132]: 2025-10-10 10:20:15.266 2 DEBUG oslo_concurrency.lockutils [req-d7dcb1bc-21ba-4259-9e50-01788e818201 req-bd983d3e-eacd-47b5-8b74-0b279437cf08 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:20:15 compute-1 nova_compute[235132]: 2025-10-10 10:20:15.266 2 DEBUG nova.compute.manager [req-d7dcb1bc-21ba-4259-9e50-01788e818201 req-bd983d3e-eacd-47b5-8b74-0b279437cf08 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] No waiting events found dispatching network-vif-plugged-d7538303-305d-4e01-9d26-cff58aec5656 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 10:20:15 compute-1 nova_compute[235132]: 2025-10-10 10:20:15.266 2 WARNING nova.compute.manager [req-d7dcb1bc-21ba-4259-9e50-01788e818201 req-bd983d3e-eacd-47b5-8b74-0b279437cf08 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Received unexpected event network-vif-plugged-d7538303-305d-4e01-9d26-cff58aec5656 for instance with vm_state active and task_state None.
Oct 10 10:20:16 compute-1 ceph-mon[79167]: pgmap v1029: 353 pgs: 353 active+clean; 167 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 347 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Oct 10 10:20:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:20:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:20:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:20:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:20:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:20:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:20:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:20:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:20:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:20:16 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:16 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:16 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:16.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:17 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:17 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:20:17 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:17.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:20:17 compute-1 ceph-mon[79167]: pgmap v1030: 353 pgs: 353 active+clean; 167 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 378 KiB/s rd, 4.3 MiB/s wr, 98 op/s
Oct 10 10:20:17 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:20:17 compute-1 ceph-mon[79167]: Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Oct 10 10:20:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:20:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:20:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:20:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:20:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:20:18 compute-1 nova_compute[235132]: 2025-10-10 10:20:18.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:18 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:18 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:20:18 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:18.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:20:19 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:19 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:19 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:19.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:19 compute-1 ceph-mon[79167]: pgmap v1031: 353 pgs: 353 active+clean; 167 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 179 op/s
Oct 10 10:20:19 compute-1 nova_compute[235132]: 2025-10-10 10:20:19.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:20 compute-1 sudo[247955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:20:20 compute-1 sudo[247955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:20:20 compute-1 sudo[247955]: pam_unix(sudo:session): session closed for user root
Oct 10 10:20:20 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:20 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:20 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:20.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:20 compute-1 sudo[247980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:20:20 compute-1 sudo[247980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:20:20 compute-1 sudo[247980]: pam_unix(sudo:session): session closed for user root
Oct 10 10:20:21 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:21 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:20:21 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:21.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:20:21 compute-1 ceph-mon[79167]: pgmap v1032: 353 pgs: 353 active+clean; 167 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 28 KiB/s wr, 82 op/s
Oct 10 10:20:21 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:20:21 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:20:22 compute-1 podman[248006]: 2025-10-10 10:20:22.009867903 +0000 UTC m=+0.096682555 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 10 10:20:22 compute-1 nova_compute[235132]: 2025-10-10 10:20:22.108 2 DEBUG nova.compute.manager [req-d379248c-379f-47ca-b3f6-b96a153841b4 req-61a4fe53-5b01-4894-a82c-e0eea0c60812 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Received event network-changed-d7538303-305d-4e01-9d26-cff58aec5656 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:20:22 compute-1 nova_compute[235132]: 2025-10-10 10:20:22.109 2 DEBUG nova.compute.manager [req-d379248c-379f-47ca-b3f6-b96a153841b4 req-61a4fe53-5b01-4894-a82c-e0eea0c60812 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Refreshing instance network info cache due to event network-changed-d7538303-305d-4e01-9d26-cff58aec5656. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 10 10:20:22 compute-1 nova_compute[235132]: 2025-10-10 10:20:22.109 2 DEBUG oslo_concurrency.lockutils [req-d379248c-379f-47ca-b3f6-b96a153841b4 req-61a4fe53-5b01-4894-a82c-e0eea0c60812 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "refresh_cache-03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:20:22 compute-1 nova_compute[235132]: 2025-10-10 10:20:22.109 2 DEBUG oslo_concurrency.lockutils [req-d379248c-379f-47ca-b3f6-b96a153841b4 req-61a4fe53-5b01-4894-a82c-e0eea0c60812 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquired lock "refresh_cache-03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:20:22 compute-1 nova_compute[235132]: 2025-10-10 10:20:22.109 2 DEBUG nova.network.neutron [req-d379248c-379f-47ca-b3f6-b96a153841b4 req-61a4fe53-5b01-4894-a82c-e0eea0c60812 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Refreshing network info cache for port d7538303-305d-4e01-9d26-cff58aec5656 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 10 10:20:22 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:20:22 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:22 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:22 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:22.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:20:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:20:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:20:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:20:23 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:23 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:23 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:23.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:23 compute-1 ceph-mon[79167]: pgmap v1033: 353 pgs: 353 active+clean; 167 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 28 KiB/s wr, 82 op/s
Oct 10 10:20:23 compute-1 nova_compute[235132]: 2025-10-10 10:20:23.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:23 compute-1 nova_compute[235132]: 2025-10-10 10:20:23.977 2 DEBUG nova.network.neutron [req-d379248c-379f-47ca-b3f6-b96a153841b4 req-61a4fe53-5b01-4894-a82c-e0eea0c60812 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Updated VIF entry in instance network info cache for port d7538303-305d-4e01-9d26-cff58aec5656. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 10 10:20:23 compute-1 nova_compute[235132]: 2025-10-10 10:20:23.977 2 DEBUG nova.network.neutron [req-d379248c-379f-47ca-b3f6-b96a153841b4 req-61a4fe53-5b01-4894-a82c-e0eea0c60812 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Updating instance_info_cache with network_info: [{"id": "d7538303-305d-4e01-9d26-cff58aec5656", "address": "fa:16:3e:1a:a6:6c", "network": {"id": "fb3e50c5-fe48-4113-87d7-4e11945ac752", "bridge": "br-int", "label": "tempest-network-smoke--916183171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7538303-30", "ovs_interfaceid": "d7538303-305d-4e01-9d26-cff58aec5656", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:20:24 compute-1 nova_compute[235132]: 2025-10-10 10:20:24.001 2 DEBUG oslo_concurrency.lockutils [req-d379248c-379f-47ca-b3f6-b96a153841b4 req-61a4fe53-5b01-4894-a82c-e0eea0c60812 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Releasing lock "refresh_cache-03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:20:24 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:24 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:24 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:24.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:24 compute-1 nova_compute[235132]: 2025-10-10 10:20:24.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:25 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:25 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:25 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:25.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:25 compute-1 ceph-mon[79167]: pgmap v1034: 353 pgs: 353 active+clean; 167 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 15 KiB/s wr, 81 op/s
Oct 10 10:20:26 compute-1 ovn_controller[131749]: 2025-10-10T10:20:26Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1a:a6:6c 10.100.0.12
Oct 10 10:20:26 compute-1 ovn_controller[131749]: 2025-10-10T10:20:26Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1a:a6:6c 10.100.0.12
Oct 10 10:20:26 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:26 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:20:26 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:26.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:20:27 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:27 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:27 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:27.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:27 compute-1 ceph-mon[79167]: pgmap v1035: 353 pgs: 353 active+clean; 167 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 15 KiB/s wr, 81 op/s
Oct 10 10:20:27 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/2156092452' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:20:27 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/2156092452' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:20:27 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:20:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:20:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:20:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:20:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:20:28 compute-1 nova_compute[235132]: 2025-10-10 10:20:28.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:28 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:28 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:28 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:28.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:29 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:29 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:29 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:29.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:29 compute-1 ceph-mon[79167]: pgmap v1036: 353 pgs: 353 active+clean; 200 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Oct 10 10:20:29 compute-1 nova_compute[235132]: 2025-10-10 10:20:29.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:30 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:30 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:30 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:30.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:31 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:31 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:20:31 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:31.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:20:31 compute-1 ceph-mon[79167]: pgmap v1037: 353 pgs: 353 active+clean; 200 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 10 10:20:32 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:20:32 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:20:32 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:20:32 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:32 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:32 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:32.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:20:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:20:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:20:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:20:33 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:33 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:20:33 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:33.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:20:33 compute-1 nova_compute[235132]: 2025-10-10 10:20:33.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:33 compute-1 ceph-mon[79167]: pgmap v1038: 353 pgs: 353 active+clean; 200 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 10 10:20:34 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:34 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:20:34 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:34.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:20:34 compute-1 nova_compute[235132]: 2025-10-10 10:20:34.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:35 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:35 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:35 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:35.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:35 compute-1 ceph-mon[79167]: pgmap v1039: 353 pgs: 353 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 10 10:20:35 compute-1 podman[248034]: 2025-10-10 10:20:35.945520514 +0000 UTC m=+0.052423234 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 10 10:20:35 compute-1 podman[248035]: 2025-10-10 10:20:35.966360294 +0000 UTC m=+0.067596620 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 10 10:20:36 compute-1 podman[248036]: 2025-10-10 10:20:36.000680413 +0000 UTC m=+0.094651060 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:20:36 compute-1 ceph-mon[79167]: pgmap v1040: 353 pgs: 353 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 10 10:20:36 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:36 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:36 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:36.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:37 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:37 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:20:37 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:37.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:20:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:20:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:20:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:20:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:20:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:20:38 compute-1 nova_compute[235132]: 2025-10-10 10:20:38.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:38 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:38 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:20:38 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:38.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:20:38 compute-1 nova_compute[235132]: 2025-10-10 10:20:38.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:38 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:20:38.890 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:dc:6a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:2f:dd:4e:d8:41'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:20:38 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:20:38.892 141156 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 10 10:20:39 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:39 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:20:39 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:39.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:20:39 compute-1 ceph-mon[79167]: pgmap v1041: 353 pgs: 353 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Oct 10 10:20:39 compute-1 nova_compute[235132]: 2025-10-10 10:20:39.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:40 compute-1 sudo[248098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:20:40 compute-1 sudo[248098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:20:40 compute-1 sudo[248098]: pam_unix(sudo:session): session closed for user root
Oct 10 10:20:40 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:40 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:40 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:40.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:41 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:41 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:20:41 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:41.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:20:41 compute-1 ceph-mon[79167]: pgmap v1042: 353 pgs: 353 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 15 KiB/s wr, 1 op/s
Oct 10 10:20:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:20:42.219 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:20:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:20:42.219 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:20:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:20:42.220 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:20:42 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:20:42 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:42 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:20:42 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:42.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:20:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:20:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:20:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:20:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:43 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:20:43 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:43 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:43 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:43.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:43 compute-1 ceph-mon[79167]: pgmap v1043: 353 pgs: 353 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 15 KiB/s wr, 1 op/s
Oct 10 10:20:43 compute-1 nova_compute[235132]: 2025-10-10 10:20:43.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:43 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:20:43.894 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ee0899c1-415d-4aa8-abe8-1240b4e8bf2c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:20:44 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:44 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:44 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:44.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:44 compute-1 nova_compute[235132]: 2025-10-10 10:20:44.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:45 compute-1 nova_compute[235132]: 2025-10-10 10:20:45.109 2 DEBUG oslo_concurrency.lockutils [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:20:45 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:45 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:45 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:45.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:45 compute-1 nova_compute[235132]: 2025-10-10 10:20:45.110 2 DEBUG oslo_concurrency.lockutils [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:20:45 compute-1 nova_compute[235132]: 2025-10-10 10:20:45.111 2 DEBUG oslo_concurrency.lockutils [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:20:45 compute-1 nova_compute[235132]: 2025-10-10 10:20:45.112 2 DEBUG oslo_concurrency.lockutils [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:20:45 compute-1 nova_compute[235132]: 2025-10-10 10:20:45.112 2 DEBUG oslo_concurrency.lockutils [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:20:45 compute-1 nova_compute[235132]: 2025-10-10 10:20:45.115 2 INFO nova.compute.manager [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Terminating instance
Oct 10 10:20:45 compute-1 nova_compute[235132]: 2025-10-10 10:20:45.117 2 DEBUG nova.compute.manager [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 10 10:20:45 compute-1 kernel: tapd7538303-30 (unregistering): left promiscuous mode
Oct 10 10:20:45 compute-1 NetworkManager[44982]: <info>  [1760091645.1752] device (tapd7538303-30): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 10 10:20:45 compute-1 ovn_controller[131749]: 2025-10-10T10:20:45Z|00093|binding|INFO|Releasing lport d7538303-305d-4e01-9d26-cff58aec5656 from this chassis (sb_readonly=0)
Oct 10 10:20:45 compute-1 ovn_controller[131749]: 2025-10-10T10:20:45Z|00094|binding|INFO|Setting lport d7538303-305d-4e01-9d26-cff58aec5656 down in Southbound
Oct 10 10:20:45 compute-1 nova_compute[235132]: 2025-10-10 10:20:45.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:45 compute-1 ovn_controller[131749]: 2025-10-10T10:20:45Z|00095|binding|INFO|Removing iface tapd7538303-30 ovn-installed in OVS
Oct 10 10:20:45 compute-1 nova_compute[235132]: 2025-10-10 10:20:45.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:45 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:20:45.200 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:a6:6c 10.100.0.12'], port_security=['fa:16:3e:1a:a6:6c 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-1.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fb3e50c5-fe48-4113-87d7-4e11945ac752', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd5e531d4b440422d946eaf6fd4e166f7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b1cafd03-311e-4cea-ac47-0377bdc1af9d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bc36a9e4-a12c-4b9d-8968-49f72bde3476, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>], logical_port=d7538303-305d-4e01-9d26-cff58aec5656) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f9372fffaf0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:20:45 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:20:45.203 141156 INFO neutron.agent.ovn.metadata.agent [-] Port d7538303-305d-4e01-9d26-cff58aec5656 in datapath fb3e50c5-fe48-4113-87d7-4e11945ac752 unbound from our chassis
Oct 10 10:20:45 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:20:45.205 141156 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fb3e50c5-fe48-4113-87d7-4e11945ac752, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 10 10:20:45 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:20:45.207 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[4d70a234-9896-45b0-b4f6-5f91b0a0fbfc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:20:45 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:20:45.208 141156 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fb3e50c5-fe48-4113-87d7-4e11945ac752 namespace which is not needed anymore
Oct 10 10:20:45 compute-1 nova_compute[235132]: 2025-10-10 10:20:45.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:45 compute-1 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Oct 10 10:20:45 compute-1 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000c.scope: Consumed 14.911s CPU time.
Oct 10 10:20:45 compute-1 systemd-machined[191637]: Machine qemu-5-instance-0000000c terminated.
Oct 10 10:20:45 compute-1 ceph-mon[79167]: pgmap v1044: 353 pgs: 353 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 7.5 KiB/s rd, 15 KiB/s wr, 2 op/s
Oct 10 10:20:45 compute-1 neutron-haproxy-ovnmeta-fb3e50c5-fe48-4113-87d7-4e11945ac752[247936]: [NOTICE]   (247941) : haproxy version is 2.8.14-c23fe91
Oct 10 10:20:45 compute-1 neutron-haproxy-ovnmeta-fb3e50c5-fe48-4113-87d7-4e11945ac752[247936]: [NOTICE]   (247941) : path to executable is /usr/sbin/haproxy
Oct 10 10:20:45 compute-1 neutron-haproxy-ovnmeta-fb3e50c5-fe48-4113-87d7-4e11945ac752[247936]: [WARNING]  (247941) : Exiting Master process...
Oct 10 10:20:45 compute-1 neutron-haproxy-ovnmeta-fb3e50c5-fe48-4113-87d7-4e11945ac752[247936]: [WARNING]  (247941) : Exiting Master process...
Oct 10 10:20:45 compute-1 neutron-haproxy-ovnmeta-fb3e50c5-fe48-4113-87d7-4e11945ac752[247936]: [ALERT]    (247941) : Current worker (247943) exited with code 143 (Terminated)
Oct 10 10:20:45 compute-1 neutron-haproxy-ovnmeta-fb3e50c5-fe48-4113-87d7-4e11945ac752[247936]: [WARNING]  (247941) : All workers exited. Exiting... (0)
Oct 10 10:20:45 compute-1 systemd[1]: libpod-c7dade801056fde8fa048a3ff53307ad38c19b768bcc569abfafc91dc556a6d6.scope: Deactivated successfully.
Oct 10 10:20:45 compute-1 podman[248152]: 2025-10-10 10:20:45.35417891 +0000 UTC m=+0.048030284 container died c7dade801056fde8fa048a3ff53307ad38c19b768bcc569abfafc91dc556a6d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fb3e50c5-fe48-4113-87d7-4e11945ac752, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 10 10:20:45 compute-1 nova_compute[235132]: 2025-10-10 10:20:45.361 2 INFO nova.virt.libvirt.driver [-] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Instance destroyed successfully.
Oct 10 10:20:45 compute-1 nova_compute[235132]: 2025-10-10 10:20:45.362 2 DEBUG nova.objects.instance [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lazy-loading 'resources' on Instance uuid 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 10 10:20:45 compute-1 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c7dade801056fde8fa048a3ff53307ad38c19b768bcc569abfafc91dc556a6d6-userdata-shm.mount: Deactivated successfully.
Oct 10 10:20:45 compute-1 systemd[1]: var-lib-containers-storage-overlay-a6f376d2e93cbfc6e1e776215b96aff2cab5e98ad5b3c1c6a6ebb79786c35344-merged.mount: Deactivated successfully.
Oct 10 10:20:45 compute-1 nova_compute[235132]: 2025-10-10 10:20:45.388 2 DEBUG nova.virt.libvirt.vif [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-10T10:20:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1292020392',display_name='tempest-TestNetworkBasicOps-server-1292020392',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1292020392',id=12,image_ref='5ae78700-970d-45b4-a57d-978a054c7519',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAT+YikYTBA+gJ7uw6swAHe8UXJlrkdRsMPU0KwiyyFauWaLZUlwpDJtpNi3JcVUbWLYjO0HRPwQgIDxYUsNqQN2uz9WldWafuvChAH95C9TEkm8Ni1fVouqScJtHFj6Ww==',key_name='tempest-TestNetworkBasicOps-218277133',keypairs=<?>,launch_index=0,launched_at=2025-10-10T10:20:13Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d5e531d4b440422d946eaf6fd4e166f7',ramdisk_id='',reservation_id='r-3xmgv6ba',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='5ae78700-970d-45b4-a57d-978a054c7519',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-188749107',owner_user_name='tempest-TestNetworkBasicOps-188749107-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-10T10:20:13Z,user_data=None,user_id='7956778c03764aaf8906c9b435337976',uuid=03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d7538303-305d-4e01-9d26-cff58aec5656", "address": "fa:16:3e:1a:a6:6c", "network": {"id": "fb3e50c5-fe48-4113-87d7-4e11945ac752", "bridge": "br-int", "label": "tempest-network-smoke--916183171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7538303-30", "ovs_interfaceid": "d7538303-305d-4e01-9d26-cff58aec5656", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 10 10:20:45 compute-1 nova_compute[235132]: 2025-10-10 10:20:45.389 2 DEBUG nova.network.os_vif_util [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converting VIF {"id": "d7538303-305d-4e01-9d26-cff58aec5656", "address": "fa:16:3e:1a:a6:6c", "network": {"id": "fb3e50c5-fe48-4113-87d7-4e11945ac752", "bridge": "br-int", "label": "tempest-network-smoke--916183171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7538303-30", "ovs_interfaceid": "d7538303-305d-4e01-9d26-cff58aec5656", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 10 10:20:45 compute-1 nova_compute[235132]: 2025-10-10 10:20:45.390 2 DEBUG nova.network.os_vif_util [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1a:a6:6c,bridge_name='br-int',has_traffic_filtering=True,id=d7538303-305d-4e01-9d26-cff58aec5656,network=Network(fb3e50c5-fe48-4113-87d7-4e11945ac752),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7538303-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 10 10:20:45 compute-1 nova_compute[235132]: 2025-10-10 10:20:45.391 2 DEBUG os_vif [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1a:a6:6c,bridge_name='br-int',has_traffic_filtering=True,id=d7538303-305d-4e01-9d26-cff58aec5656,network=Network(fb3e50c5-fe48-4113-87d7-4e11945ac752),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7538303-30') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 10 10:20:45 compute-1 nova_compute[235132]: 2025-10-10 10:20:45.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:45 compute-1 nova_compute[235132]: 2025-10-10 10:20:45.393 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7538303-30, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:20:45 compute-1 podman[248152]: 2025-10-10 10:20:45.397013401 +0000 UTC m=+0.090864715 container cleanup c7dade801056fde8fa048a3ff53307ad38c19b768bcc569abfafc91dc556a6d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fb3e50c5-fe48-4113-87d7-4e11945ac752, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 10 10:20:45 compute-1 nova_compute[235132]: 2025-10-10 10:20:45.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:45 compute-1 nova_compute[235132]: 2025-10-10 10:20:45.399 2 INFO os_vif [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1a:a6:6c,bridge_name='br-int',has_traffic_filtering=True,id=d7538303-305d-4e01-9d26-cff58aec5656,network=Network(fb3e50c5-fe48-4113-87d7-4e11945ac752),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7538303-30')
Oct 10 10:20:45 compute-1 systemd[1]: libpod-conmon-c7dade801056fde8fa048a3ff53307ad38c19b768bcc569abfafc91dc556a6d6.scope: Deactivated successfully.
Oct 10 10:20:45 compute-1 podman[248197]: 2025-10-10 10:20:45.470599723 +0000 UTC m=+0.048118686 container remove c7dade801056fde8fa048a3ff53307ad38c19b768bcc569abfafc91dc556a6d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-fb3e50c5-fe48-4113-87d7-4e11945ac752, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 10 10:20:45 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:20:45.479 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[6292a6a4-2b17-43a6-8b98-64c96fb7abc3]: (4, ('Fri Oct 10 10:20:45 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fb3e50c5-fe48-4113-87d7-4e11945ac752 (c7dade801056fde8fa048a3ff53307ad38c19b768bcc569abfafc91dc556a6d6)\nc7dade801056fde8fa048a3ff53307ad38c19b768bcc569abfafc91dc556a6d6\nFri Oct 10 10:20:45 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fb3e50c5-fe48-4113-87d7-4e11945ac752 (c7dade801056fde8fa048a3ff53307ad38c19b768bcc569abfafc91dc556a6d6)\nc7dade801056fde8fa048a3ff53307ad38c19b768bcc569abfafc91dc556a6d6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:20:45 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:20:45.482 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[37d2d6fc-fdfc-4d09-9d96-8ae2217d16c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:20:45 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:20:45.484 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfb3e50c5-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:20:45 compute-1 nova_compute[235132]: 2025-10-10 10:20:45.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:45 compute-1 kernel: tapfb3e50c5-f0: left promiscuous mode
Oct 10 10:20:45 compute-1 nova_compute[235132]: 2025-10-10 10:20:45.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:45 compute-1 nova_compute[235132]: 2025-10-10 10:20:45.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:45 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:20:45.504 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[0bf28aaf-ccef-4881-9168-6dc930b2a110]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:20:45 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:20:45.529 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[4018ea31-e305-42c4-b555-a5aa86b59d7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:20:45 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:20:45.531 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[1516dc31-a540-4579-a94c-b34dd3729b20]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:20:45 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:20:45.552 238898 DEBUG oslo.privsep.daemon [-] privsep: reply[59272119-2324-4e5e-b686-5b63e35b923b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452634, 'reachable_time': 15143, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248224, 'error': None, 'target': 'ovnmeta-fb3e50c5-fe48-4113-87d7-4e11945ac752', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:20:45 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:20:45.555 141275 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fb3e50c5-fe48-4113-87d7-4e11945ac752 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 10 10:20:45 compute-1 systemd[1]: run-netns-ovnmeta\x2dfb3e50c5\x2dfe48\x2d4113\x2d87d7\x2d4e11945ac752.mount: Deactivated successfully.
Oct 10 10:20:45 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:20:45.555 141275 DEBUG oslo.privsep.daemon [-] privsep: reply[6f3e8861-4ade-4ff1-8ceb-95627cd874e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 10 10:20:45 compute-1 nova_compute[235132]: 2025-10-10 10:20:45.581 2 DEBUG nova.compute.manager [req-1b6b8346-54c2-4423-97d1-0f8840116513 req-2a9100e1-d3c5-4031-b804-dc3744b3686a 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Received event network-vif-unplugged-d7538303-305d-4e01-9d26-cff58aec5656 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:20:45 compute-1 nova_compute[235132]: 2025-10-10 10:20:45.582 2 DEBUG oslo_concurrency.lockutils [req-1b6b8346-54c2-4423-97d1-0f8840116513 req-2a9100e1-d3c5-4031-b804-dc3744b3686a 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:20:45 compute-1 nova_compute[235132]: 2025-10-10 10:20:45.582 2 DEBUG oslo_concurrency.lockutils [req-1b6b8346-54c2-4423-97d1-0f8840116513 req-2a9100e1-d3c5-4031-b804-dc3744b3686a 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:20:45 compute-1 nova_compute[235132]: 2025-10-10 10:20:45.583 2 DEBUG oslo_concurrency.lockutils [req-1b6b8346-54c2-4423-97d1-0f8840116513 req-2a9100e1-d3c5-4031-b804-dc3744b3686a 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:20:45 compute-1 nova_compute[235132]: 2025-10-10 10:20:45.583 2 DEBUG nova.compute.manager [req-1b6b8346-54c2-4423-97d1-0f8840116513 req-2a9100e1-d3c5-4031-b804-dc3744b3686a 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] No waiting events found dispatching network-vif-unplugged-d7538303-305d-4e01-9d26-cff58aec5656 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 10:20:45 compute-1 nova_compute[235132]: 2025-10-10 10:20:45.584 2 DEBUG nova.compute.manager [req-1b6b8346-54c2-4423-97d1-0f8840116513 req-2a9100e1-d3c5-4031-b804-dc3744b3686a 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Received event network-vif-unplugged-d7538303-305d-4e01-9d26-cff58aec5656 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct 10 10:20:45 compute-1 nova_compute[235132]: 2025-10-10 10:20:45.776 2 DEBUG nova.compute.manager [req-471ab561-9162-46df-820b-dce4a6a8b53e req-e9ff2f14-4abf-465c-8768-f8583a909661 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Received event network-changed-d7538303-305d-4e01-9d26-cff58aec5656 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:20:45 compute-1 nova_compute[235132]: 2025-10-10 10:20:45.777 2 DEBUG nova.compute.manager [req-471ab561-9162-46df-820b-dce4a6a8b53e req-e9ff2f14-4abf-465c-8768-f8583a909661 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Refreshing instance network info cache due to event network-changed-d7538303-305d-4e01-9d26-cff58aec5656. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 10 10:20:45 compute-1 nova_compute[235132]: 2025-10-10 10:20:45.778 2 DEBUG oslo_concurrency.lockutils [req-471ab561-9162-46df-820b-dce4a6a8b53e req-e9ff2f14-4abf-465c-8768-f8583a909661 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "refresh_cache-03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 10 10:20:45 compute-1 nova_compute[235132]: 2025-10-10 10:20:45.779 2 DEBUG oslo_concurrency.lockutils [req-471ab561-9162-46df-820b-dce4a6a8b53e req-e9ff2f14-4abf-465c-8768-f8583a909661 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquired lock "refresh_cache-03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 10 10:20:45 compute-1 nova_compute[235132]: 2025-10-10 10:20:45.779 2 DEBUG nova.network.neutron [req-471ab561-9162-46df-820b-dce4a6a8b53e req-e9ff2f14-4abf-465c-8768-f8583a909661 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Refreshing network info cache for port d7538303-305d-4e01-9d26-cff58aec5656 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 10 10:20:45 compute-1 nova_compute[235132]: 2025-10-10 10:20:45.825 2 INFO nova.virt.libvirt.driver [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Deleting instance files /var/lib/nova/instances/03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3_del
Oct 10 10:20:45 compute-1 nova_compute[235132]: 2025-10-10 10:20:45.827 2 INFO nova.virt.libvirt.driver [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Deletion of /var/lib/nova/instances/03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3_del complete
Oct 10 10:20:45 compute-1 nova_compute[235132]: 2025-10-10 10:20:45.884 2 INFO nova.compute.manager [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Took 0.77 seconds to destroy the instance on the hypervisor.
Oct 10 10:20:45 compute-1 nova_compute[235132]: 2025-10-10 10:20:45.885 2 DEBUG oslo.service.loopingcall [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 10 10:20:45 compute-1 nova_compute[235132]: 2025-10-10 10:20:45.886 2 DEBUG nova.compute.manager [-] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 10 10:20:45 compute-1 nova_compute[235132]: 2025-10-10 10:20:45.886 2 DEBUG nova.network.neutron [-] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 10 10:20:46 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:46 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:46 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:46.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:47 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:47 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:47 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:47.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:47 compute-1 nova_compute[235132]: 2025-10-10 10:20:47.218 2 DEBUG nova.network.neutron [-] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:20:47 compute-1 nova_compute[235132]: 2025-10-10 10:20:47.240 2 INFO nova.compute.manager [-] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Took 1.35 seconds to deallocate network for instance.
Oct 10 10:20:47 compute-1 ceph-mon[79167]: pgmap v1045: 353 pgs: 353 active+clean; 200 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 6.7 KiB/s rd, 15 KiB/s wr, 2 op/s
Oct 10 10:20:47 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:20:47 compute-1 nova_compute[235132]: 2025-10-10 10:20:47.289 2 DEBUG oslo_concurrency.lockutils [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:20:47 compute-1 nova_compute[235132]: 2025-10-10 10:20:47.290 2 DEBUG oslo_concurrency.lockutils [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:20:47 compute-1 nova_compute[235132]: 2025-10-10 10:20:47.346 2 DEBUG oslo_concurrency.processutils [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:20:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:20:47 compute-1 nova_compute[235132]: 2025-10-10 10:20:47.710 2 DEBUG nova.compute.manager [req-2b2cbcb0-d215-4a41-a065-229cf634f725 req-54e31f2b-f890-4f11-9c74-4ab24232c50f 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Received event network-vif-plugged-d7538303-305d-4e01-9d26-cff58aec5656 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:20:47 compute-1 nova_compute[235132]: 2025-10-10 10:20:47.711 2 DEBUG oslo_concurrency.lockutils [req-2b2cbcb0-d215-4a41-a065-229cf634f725 req-54e31f2b-f890-4f11-9c74-4ab24232c50f 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Acquiring lock "03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:20:47 compute-1 nova_compute[235132]: 2025-10-10 10:20:47.712 2 DEBUG oslo_concurrency.lockutils [req-2b2cbcb0-d215-4a41-a065-229cf634f725 req-54e31f2b-f890-4f11-9c74-4ab24232c50f 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:20:47 compute-1 nova_compute[235132]: 2025-10-10 10:20:47.712 2 DEBUG oslo_concurrency.lockutils [req-2b2cbcb0-d215-4a41-a065-229cf634f725 req-54e31f2b-f890-4f11-9c74-4ab24232c50f 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Lock "03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:20:47 compute-1 nova_compute[235132]: 2025-10-10 10:20:47.712 2 DEBUG nova.compute.manager [req-2b2cbcb0-d215-4a41-a065-229cf634f725 req-54e31f2b-f890-4f11-9c74-4ab24232c50f 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] No waiting events found dispatching network-vif-plugged-d7538303-305d-4e01-9d26-cff58aec5656 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 10 10:20:47 compute-1 nova_compute[235132]: 2025-10-10 10:20:47.713 2 WARNING nova.compute.manager [req-2b2cbcb0-d215-4a41-a065-229cf634f725 req-54e31f2b-f890-4f11-9c74-4ab24232c50f 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Received unexpected event network-vif-plugged-d7538303-305d-4e01-9d26-cff58aec5656 for instance with vm_state deleted and task_state None.
Oct 10 10:20:47 compute-1 nova_compute[235132]: 2025-10-10 10:20:47.713 2 DEBUG nova.compute.manager [req-2b2cbcb0-d215-4a41-a065-229cf634f725 req-54e31f2b-f890-4f11-9c74-4ab24232c50f 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Received event network-vif-deleted-d7538303-305d-4e01-9d26-cff58aec5656 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 10 10:20:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:20:47 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2924327785' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:20:47 compute-1 nova_compute[235132]: 2025-10-10 10:20:47.818 2 DEBUG oslo_concurrency.processutils [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:20:47 compute-1 nova_compute[235132]: 2025-10-10 10:20:47.825 2 DEBUG nova.compute.provider_tree [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:20:47 compute-1 nova_compute[235132]: 2025-10-10 10:20:47.842 2 DEBUG nova.scheduler.client.report [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:20:47 compute-1 nova_compute[235132]: 2025-10-10 10:20:47.865 2 DEBUG oslo_concurrency.lockutils [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:20:47 compute-1 nova_compute[235132]: 2025-10-10 10:20:47.892 2 INFO nova.scheduler.client.report [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Deleted allocations for instance 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3
Oct 10 10:20:47 compute-1 nova_compute[235132]: 2025-10-10 10:20:47.966 2 DEBUG oslo_concurrency.lockutils [None req-93d26cab-3778-4002-a086-89483706b027 7956778c03764aaf8906c9b435337976 d5e531d4b440422d946eaf6fd4e166f7 - - default default] Lock "03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.855s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:20:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:20:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:20:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:20:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:20:48 compute-1 nova_compute[235132]: 2025-10-10 10:20:48.205 2 DEBUG nova.network.neutron [req-471ab561-9162-46df-820b-dce4a6a8b53e req-e9ff2f14-4abf-465c-8768-f8583a909661 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Updated VIF entry in instance network info cache for port d7538303-305d-4e01-9d26-cff58aec5656. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 10 10:20:48 compute-1 nova_compute[235132]: 2025-10-10 10:20:48.206 2 DEBUG nova.network.neutron [req-471ab561-9162-46df-820b-dce4a6a8b53e req-e9ff2f14-4abf-465c-8768-f8583a909661 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Updating instance_info_cache with network_info: [{"id": "d7538303-305d-4e01-9d26-cff58aec5656", "address": "fa:16:3e:1a:a6:6c", "network": {"id": "fb3e50c5-fe48-4113-87d7-4e11945ac752", "bridge": "br-int", "label": "tempest-network-smoke--916183171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d5e531d4b440422d946eaf6fd4e166f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7538303-30", "ovs_interfaceid": "d7538303-305d-4e01-9d26-cff58aec5656", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 10 10:20:48 compute-1 nova_compute[235132]: 2025-10-10 10:20:48.230 2 DEBUG oslo_concurrency.lockutils [req-471ab561-9162-46df-820b-dce4a6a8b53e req-e9ff2f14-4abf-465c-8768-f8583a909661 3358614a6ba84b89b10fe1d06ba95d87 4c8b489a4ba64bf4a262e05dd1b12019 - - default default] Releasing lock "refresh_cache-03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 10 10:20:48 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/2924327785' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:20:48 compute-1 nova_compute[235132]: 2025-10-10 10:20:48.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:48 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:48 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:48 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:48.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:49 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:49 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:20:49 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:49.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:20:49 compute-1 ceph-mon[79167]: pgmap v1046: 353 pgs: 353 active+clean; 121 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 20 KiB/s wr, 30 op/s
Oct 10 10:20:50 compute-1 nova_compute[235132]: 2025-10-10 10:20:50.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:50 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:50 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:50 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:50.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:51 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:51 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:20:51 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:51.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:20:51 compute-1 ceph-mon[79167]: pgmap v1047: 353 pgs: 353 active+clean; 121 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 5.5 KiB/s wr, 29 op/s
Oct 10 10:20:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:20:52 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:52 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:52 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:52.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:52 compute-1 podman[248251]: 2025-10-10 10:20:52.993734853 +0000 UTC m=+0.092925722 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 10 10:20:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:20:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:20:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:20:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:20:53 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:53 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:53 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:53.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:53 compute-1 ceph-mon[79167]: pgmap v1048: 353 pgs: 353 active+clean; 121 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 5.5 KiB/s wr, 29 op/s
Oct 10 10:20:53 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2442865633' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:20:53 compute-1 nova_compute[235132]: 2025-10-10 10:20:53.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:54 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:54 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:54 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:54.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:55 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:55 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:20:55 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:55.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:20:55 compute-1 ceph-mon[79167]: pgmap v1049: 353 pgs: 353 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 7.7 KiB/s wr, 58 op/s
Oct 10 10:20:55 compute-1 nova_compute[235132]: 2025-10-10 10:20:55.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:56 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:56 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:56 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:56.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:57 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:57 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:20:57 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:57.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:20:57 compute-1 ceph-mon[79167]: pgmap v1050: 353 pgs: 353 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 7.7 KiB/s wr, 57 op/s
Oct 10 10:20:57 compute-1 nova_compute[235132]: 2025-10-10 10:20:57.471 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:20:57 compute-1 nova_compute[235132]: 2025-10-10 10:20:57.472 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:20:57 compute-1 nova_compute[235132]: 2025-10-10 10:20:57.472 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 10:20:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:20:57 compute-1 nova_compute[235132]: 2025-10-10 10:20:57.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:20:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:20:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:20:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:20:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:20:58 compute-1 nova_compute[235132]: 2025-10-10 10:20:58.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:58 compute-1 nova_compute[235132]: 2025-10-10 10:20:58.041 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:20:58 compute-1 nova_compute[235132]: 2025-10-10 10:20:58.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:20:58 compute-1 nova_compute[235132]: 2025-10-10 10:20:58.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:20:58 compute-1 nova_compute[235132]: 2025-10-10 10:20:58.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:20:58 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:58 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:20:58 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:20:58.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:20:59 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:20:59 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:20:59 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:20:59.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:20:59 compute-1 ceph-mon[79167]: pgmap v1051: 353 pgs: 353 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 7.7 KiB/s wr, 57 op/s
Oct 10 10:20:59 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/3082504773' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:21:00 compute-1 nova_compute[235132]: 2025-10-10 10:21:00.045 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:21:00 compute-1 nova_compute[235132]: 2025-10-10 10:21:00.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 10:21:00 compute-1 nova_compute[235132]: 2025-10-10 10:21:00.046 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 10:21:00 compute-1 nova_compute[235132]: 2025-10-10 10:21:00.063 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 10 10:21:00 compute-1 nova_compute[235132]: 2025-10-10 10:21:00.063 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:21:00 compute-1 nova_compute[235132]: 2025-10-10 10:21:00.064 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 10 10:21:00 compute-1 nova_compute[235132]: 2025-10-10 10:21:00.354 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760091645.3541813, 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 10 10:21:00 compute-1 nova_compute[235132]: 2025-10-10 10:21:00.355 2 INFO nova.compute.manager [-] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] VM Stopped (Lifecycle Event)
Oct 10 10:21:00 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/4267748783' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:21:00 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/4120952754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:21:00 compute-1 nova_compute[235132]: 2025-10-10 10:21:00.391 2 DEBUG nova.compute.manager [None req-d491c3e9-5712-4726-b2bf-4a5472c97a1e - - - - - -] [instance: 03dc9d7f-d5ca-4332-ab0e-dfee2ad55cb3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 10 10:21:00 compute-1 nova_compute[235132]: 2025-10-10 10:21:00.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:00 compute-1 sudo[248276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:21:00 compute-1 sudo[248276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:21:00 compute-1 sudo[248276]: pam_unix(sudo:session): session closed for user root
Oct 10 10:21:00 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:00 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:00 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:00.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:01 compute-1 nova_compute[235132]: 2025-10-10 10:21:01.066 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:21:01 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:01 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:01 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:01.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:01 compute-1 ceph-mon[79167]: pgmap v1052: 353 pgs: 353 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 29 op/s
Oct 10 10:21:01 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1358297504' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:21:01 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:21:02 compute-1 nova_compute[235132]: 2025-10-10 10:21:02.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:21:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:21:02 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:02 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:21:02 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:02.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:21:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:21:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:21:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:21:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:21:03 compute-1 nova_compute[235132]: 2025-10-10 10:21:03.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:21:03 compute-1 nova_compute[235132]: 2025-10-10 10:21:03.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 10 10:21:03 compute-1 nova_compute[235132]: 2025-10-10 10:21:03.074 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 10 10:21:03 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:03 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:21:03 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:03.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:21:03 compute-1 nova_compute[235132]: 2025-10-10 10:21:03.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:03 compute-1 ceph-mon[79167]: pgmap v1053: 353 pgs: 353 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 29 op/s
Oct 10 10:21:04 compute-1 nova_compute[235132]: 2025-10-10 10:21:04.075 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:21:04 compute-1 nova_compute[235132]: 2025-10-10 10:21:04.103 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:21:04 compute-1 nova_compute[235132]: 2025-10-10 10:21:04.104 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:21:04 compute-1 nova_compute[235132]: 2025-10-10 10:21:04.104 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:21:04 compute-1 nova_compute[235132]: 2025-10-10 10:21:04.104 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:21:04 compute-1 nova_compute[235132]: 2025-10-10 10:21:04.105 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:21:04 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:04 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:04 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:04.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:04 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:21:04 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3705335326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:21:04 compute-1 nova_compute[235132]: 2025-10-10 10:21:04.667 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:21:04 compute-1 nova_compute[235132]: 2025-10-10 10:21:04.838 2 WARNING nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:21:04 compute-1 nova_compute[235132]: 2025-10-10 10:21:04.840 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4936MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:21:04 compute-1 nova_compute[235132]: 2025-10-10 10:21:04.840 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:21:04 compute-1 nova_compute[235132]: 2025-10-10 10:21:04.840 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:21:04 compute-1 nova_compute[235132]: 2025-10-10 10:21:04.935 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:21:04 compute-1 nova_compute[235132]: 2025-10-10 10:21:04.935 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:21:05 compute-1 nova_compute[235132]: 2025-10-10 10:21:05.002 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Refreshing inventories for resource provider c9b2c4a3-cb19-4387-8719-36027e3cdaec _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 10 10:21:05 compute-1 nova_compute[235132]: 2025-10-10 10:21:05.096 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Updating ProviderTree inventory for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 10 10:21:05 compute-1 nova_compute[235132]: 2025-10-10 10:21:05.097 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Updating inventory in ProviderTree for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 10 10:21:05 compute-1 nova_compute[235132]: 2025-10-10 10:21:05.119 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Refreshing aggregate associations for resource provider c9b2c4a3-cb19-4387-8719-36027e3cdaec, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 10 10:21:05 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:05 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:21:05 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:05.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:21:05 compute-1 nova_compute[235132]: 2025-10-10 10:21:05.155 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Refreshing trait associations for resource provider c9b2c4a3-cb19-4387-8719-36027e3cdaec, traits: HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_F16C,HW_CPU_X86_AVX,HW_CPU_X86_FMA3,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE42,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI2,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 10 10:21:05 compute-1 nova_compute[235132]: 2025-10-10 10:21:05.178 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:21:05 compute-1 ceph-mon[79167]: pgmap v1054: 353 pgs: 353 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 29 op/s
Oct 10 10:21:05 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3705335326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:21:05 compute-1 nova_compute[235132]: 2025-10-10 10:21:05.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:05 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:21:05 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/127293249' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:21:05 compute-1 nova_compute[235132]: 2025-10-10 10:21:05.638 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:21:05 compute-1 nova_compute[235132]: 2025-10-10 10:21:05.646 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:21:05 compute-1 nova_compute[235132]: 2025-10-10 10:21:05.675 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:21:05 compute-1 nova_compute[235132]: 2025-10-10 10:21:05.699 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:21:05 compute-1 nova_compute[235132]: 2025-10-10 10:21:05.700 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:21:05 compute-1 nova_compute[235132]: 2025-10-10 10:21:05.701 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:21:06 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/127293249' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:21:06 compute-1 ceph-mon[79167]: pgmap v1055: 353 pgs: 353 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:21:06 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:06 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:06 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:06.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:06 compute-1 podman[248349]: 2025-10-10 10:21:06.965991467 +0000 UTC m=+0.070626192 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 10 10:21:06 compute-1 podman[248350]: 2025-10-10 10:21:06.980301989 +0000 UTC m=+0.076520444 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 10 10:21:07 compute-1 podman[248351]: 2025-10-10 10:21:07.002675641 +0000 UTC m=+0.103789390 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:21:07 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:07 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:07 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:07.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:21:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:21:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:21:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:21:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:21:08 compute-1 nova_compute[235132]: 2025-10-10 10:21:08.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:08 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:08 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:21:08 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:08.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:21:09 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:09 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:09 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:09.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:09 compute-1 ceph-mon[79167]: pgmap v1056: 353 pgs: 353 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:21:10 compute-1 nova_compute[235132]: 2025-10-10 10:21:10.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:10 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:10 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:21:10 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:10.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:21:11 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:11 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:21:11 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:11.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:21:11 compute-1 ceph-mon[79167]: pgmap v1057: 353 pgs: 353 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:21:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:21:12 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:12 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:21:12 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:12.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:21:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:21:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:21:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:21:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:13 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:21:13 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:13 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:13 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:13.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:13 compute-1 ceph-mon[79167]: pgmap v1058: 353 pgs: 353 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:21:13 compute-1 nova_compute[235132]: 2025-10-10 10:21:13.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:14 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:14 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:21:14 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:14.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:21:15 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:15 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:15 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:15.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:15 compute-1 ceph-mon[79167]: pgmap v1059: 353 pgs: 353 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:21:15 compute-1 nova_compute[235132]: 2025-10-10 10:21:15.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:16 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:16 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:16 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:16.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:17 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:17 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:17 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:17.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:17 compute-1 ceph-mon[79167]: pgmap v1060: 353 pgs: 353 active+clean; 41 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:21:17 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:21:17 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/463243596' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:21:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:21:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:21:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:21:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:21:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:21:18 compute-1 nova_compute[235132]: 2025-10-10 10:21:18.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:18 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:18 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:21:18 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:18.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:21:19 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:19 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:19 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:19.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:19 compute-1 ceph-mon[79167]: pgmap v1061: 353 pgs: 353 active+clean; 88 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 1.8 MiB/s wr, 20 op/s
Oct 10 10:21:20 compute-1 nova_compute[235132]: 2025-10-10 10:21:20.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:20 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:20 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:20 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:20.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:20 compute-1 sudo[248417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:21:20 compute-1 sudo[248417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:21:20 compute-1 sudo[248417]: pam_unix(sudo:session): session closed for user root
Oct 10 10:21:21 compute-1 sudo[248442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:21:21 compute-1 sudo[248442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:21:21 compute-1 sudo[248442]: pam_unix(sudo:session): session closed for user root
Oct 10 10:21:21 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:21 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:21:21 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:21.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:21:21 compute-1 sudo[248467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:21:21 compute-1 sudo[248467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:21:21 compute-1 ceph-mon[79167]: pgmap v1062: 353 pgs: 353 active+clean; 88 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.8 MiB/s wr, 19 op/s
Oct 10 10:21:21 compute-1 sudo[248467]: pam_unix(sudo:session): session closed for user root
Oct 10 10:21:22 compute-1 unix_chkpwd[248525]: password check failed for user (root)
Oct 10 10:21:22 compute-1 sshd-session[248491]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.217  user=root
Oct 10 10:21:22 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:21:22 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:21:22 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:21:22 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:21:22 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:21:22 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:21:22 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:21:22 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:21:22 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:22 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:22 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:22.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:21:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:21:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:21:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:21:23 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:23 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:23 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:23.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:23 compute-1 ceph-mon[79167]: pgmap v1063: 353 pgs: 353 active+clean; 88 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 1.8 MiB/s wr, 20 op/s
Oct 10 10:21:23 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3399503447' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:21:23 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1112813609' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 10 10:21:23 compute-1 nova_compute[235132]: 2025-10-10 10:21:23.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:23 compute-1 sshd-session[248491]: Failed password for root from 193.46.255.217 port 52258 ssh2
Oct 10 10:21:23 compute-1 podman[248528]: 2025-10-10 10:21:23.998075193 +0000 UTC m=+0.092107850 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:21:24 compute-1 unix_chkpwd[248547]: password check failed for user (root)
Oct 10 10:21:24 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:24 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:24 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:24.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:25 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:25 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:21:25 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:25.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:21:25 compute-1 ceph-mon[79167]: pgmap v1064: 353 pgs: 353 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct 10 10:21:25 compute-1 nova_compute[235132]: 2025-10-10 10:21:25.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:26 compute-1 sshd-session[248491]: Failed password for root from 193.46.255.217 port 52258 ssh2
Oct 10 10:21:26 compute-1 unix_chkpwd[248573]: password check failed for user (root)
Oct 10 10:21:26 compute-1 sudo[248549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:21:26 compute-1 sudo[248549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:21:26 compute-1 sudo[248549]: pam_unix(sudo:session): session closed for user root
Oct 10 10:21:26 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:26 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:26 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:26.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:27 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:27 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:21:27 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:27.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:21:27 compute-1 ceph-mon[79167]: pgmap v1065: 353 pgs: 353 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 10 10:21:27 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/213170853' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:21:27 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/213170853' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:21:27 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:21:27 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:21:27 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:21:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:21:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:21:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:21:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:21:28 compute-1 nova_compute[235132]: 2025-10-10 10:21:28.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:28 compute-1 ceph-mon[79167]: pgmap v1066: 353 pgs: 353 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Oct 10 10:21:28 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:28 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:28 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:28.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:28 compute-1 sshd-session[248491]: Failed password for root from 193.46.255.217 port 52258 ssh2
Oct 10 10:21:29 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:29 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:21:29 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:29.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:21:30 compute-1 nova_compute[235132]: 2025-10-10 10:21:30.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:30 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:30 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:30 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:30.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:30 compute-1 ceph-mon[79167]: pgmap v1067: 353 pgs: 353 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 84 op/s
Oct 10 10:21:31 compute-1 sshd-session[248491]: Received disconnect from 193.46.255.217 port 52258:11:  [preauth]
Oct 10 10:21:31 compute-1 sshd-session[248491]: Disconnected from authenticating user root 193.46.255.217 port 52258 [preauth]
Oct 10 10:21:31 compute-1 sshd-session[248491]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.217  user=root
Oct 10 10:21:31 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:31 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:31 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:31.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:31 compute-1 unix_chkpwd[248580]: password check failed for user (root)
Oct 10 10:21:31 compute-1 sshd-session[248577]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.217  user=root
Oct 10 10:21:31 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:21:32 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:21:32 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:32 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:32 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:32.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:21:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:21:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:21:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:21:33 compute-1 ceph-mon[79167]: pgmap v1068: 353 pgs: 353 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 84 op/s
Oct 10 10:21:33 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:33 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:33 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:33.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:33 compute-1 nova_compute[235132]: 2025-10-10 10:21:33.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:33 compute-1 sshd-session[248577]: Failed password for root from 193.46.255.217 port 36286 ssh2
Oct 10 10:21:34 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #61. Immutable memtables: 0.
Oct 10 10:21:34 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:21:34.036547) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 10:21:34 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 61
Oct 10 10:21:34 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091694036608, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2363, "num_deletes": 251, "total_data_size": 6358855, "memory_usage": 6441800, "flush_reason": "Manual Compaction"}
Oct 10 10:21:34 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #62: started
Oct 10 10:21:34 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091694060426, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 62, "file_size": 4092731, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31199, "largest_seqno": 33557, "table_properties": {"data_size": 4083132, "index_size": 6029, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20048, "raw_average_key_size": 20, "raw_value_size": 4063987, "raw_average_value_size": 4155, "num_data_blocks": 259, "num_entries": 978, "num_filter_entries": 978, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760091492, "oldest_key_time": 1760091492, "file_creation_time": 1760091694, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:21:34 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 24106 microseconds, and 16293 cpu microseconds.
Oct 10 10:21:34 compute-1 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:21:34 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:21:34.060643) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #62: 4092731 bytes OK
Oct 10 10:21:34 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:21:34.060716) [db/memtable_list.cc:519] [default] Level-0 commit table #62 started
Oct 10 10:21:34 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:21:34.062511) [db/memtable_list.cc:722] [default] Level-0 commit table #62: memtable #1 done
Oct 10 10:21:34 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:21:34.062530) EVENT_LOG_v1 {"time_micros": 1760091694062524, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 10:21:34 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:21:34.062558) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 10:21:34 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 6348327, prev total WAL file size 6348327, number of live WAL files 2.
Oct 10 10:21:34 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000058.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:21:34 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:21:34.064996) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Oct 10 10:21:34 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 10:21:34 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [62(3996KB)], [60(11MB)]
Oct 10 10:21:34 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091694065089, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [62], "files_L6": [60], "score": -1, "input_data_size": 16141638, "oldest_snapshot_seqno": -1}
Oct 10 10:21:34 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #63: 6211 keys, 14033198 bytes, temperature: kUnknown
Oct 10 10:21:34 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091694146245, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 63, "file_size": 14033198, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13992624, "index_size": 23952, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15557, "raw_key_size": 159069, "raw_average_key_size": 25, "raw_value_size": 13881683, "raw_average_value_size": 2235, "num_data_blocks": 964, "num_entries": 6211, "num_filter_entries": 6211, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089522, "oldest_key_time": 0, "file_creation_time": 1760091694, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 63, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:21:34 compute-1 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:21:34 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:21:34.146813) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 14033198 bytes
Oct 10 10:21:34 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:21:34.148386) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 198.3 rd, 172.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 11.5 +0.0 blob) out(13.4 +0.0 blob), read-write-amplify(7.4) write-amplify(3.4) OK, records in: 6732, records dropped: 521 output_compression: NoCompression
Oct 10 10:21:34 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:21:34.148420) EVENT_LOG_v1 {"time_micros": 1760091694148404, "job": 36, "event": "compaction_finished", "compaction_time_micros": 81388, "compaction_time_cpu_micros": 52495, "output_level": 6, "num_output_files": 1, "total_output_size": 14033198, "num_input_records": 6732, "num_output_records": 6211, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 10:21:34 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:21:34 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091694150764, "job": 36, "event": "table_file_deletion", "file_number": 62}
Oct 10 10:21:34 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000060.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:21:34 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091694155677, "job": 36, "event": "table_file_deletion", "file_number": 60}
Oct 10 10:21:34 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:21:34.064842) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:21:34 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:21:34.155871) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:21:34 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:21:34.155878) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:21:34 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:21:34.155879) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:21:34 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:21:34.155881) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:21:34 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:21:34.155882) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:21:34 compute-1 unix_chkpwd[248582]: password check failed for user (root)
Oct 10 10:21:34 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:34 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:34 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:34.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:35 compute-1 ceph-mon[79167]: pgmap v1069: 353 pgs: 353 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 83 op/s
Oct 10 10:21:35 compute-1 ovn_controller[131749]: 2025-10-10T10:21:35Z|00096|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory
Oct 10 10:21:35 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:35 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:35 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:35.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:35 compute-1 nova_compute[235132]: 2025-10-10 10:21:35.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:36 compute-1 sshd-session[248577]: Failed password for root from 193.46.255.217 port 36286 ssh2
Oct 10 10:21:36 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:36 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:21:36 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:36.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:21:37 compute-1 ceph-mon[79167]: pgmap v1070: 353 pgs: 353 active+clean; 88 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 10 10:21:37 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:37 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:37 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:37.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:21:37 compute-1 podman[248586]: 2025-10-10 10:21:37.994314651 +0000 UTC m=+0.084370937 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.vendor=CentOS)
Oct 10 10:21:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:21:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:21:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:21:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:21:38 compute-1 podman[248587]: 2025-10-10 10:21:38.010498584 +0000 UTC m=+0.093003213 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=multipathd)
Oct 10 10:21:38 compute-1 podman[248588]: 2025-10-10 10:21:38.024903488 +0000 UTC m=+0.113967607 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 10 10:21:38 compute-1 nova_compute[235132]: 2025-10-10 10:21:38.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:38 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:38 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:38 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:38.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:38 compute-1 unix_chkpwd[248650]: password check failed for user (root)
Oct 10 10:21:39 compute-1 ceph-mon[79167]: pgmap v1071: 353 pgs: 353 active+clean; 113 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Oct 10 10:21:39 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:39 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:21:39 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:39.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:21:40 compute-1 nova_compute[235132]: 2025-10-10 10:21:40.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:40 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:40 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:21:40 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:40.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:21:40 compute-1 sudo[248652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:21:40 compute-1 sudo[248652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:21:40 compute-1 sudo[248652]: pam_unix(sudo:session): session closed for user root
Oct 10 10:21:40 compute-1 sshd-session[248577]: Failed password for root from 193.46.255.217 port 36286 ssh2
Oct 10 10:21:41 compute-1 ceph-mon[79167]: pgmap v1072: 353 pgs: 353 active+clean; 113 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 292 KiB/s rd, 2.1 MiB/s wr, 52 op/s
Oct 10 10:21:41 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:41 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:41 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:41.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:21:42.220 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:21:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:21:42.221 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:21:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:21:42.221 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:21:42 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:21:42 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:42 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:42 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:42.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:21:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:21:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:21:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:43 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:21:43 compute-1 ceph-mon[79167]: pgmap v1073: 353 pgs: 353 active+clean; 113 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 292 KiB/s rd, 2.1 MiB/s wr, 52 op/s
Oct 10 10:21:43 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:43 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:21:43 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:43.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:21:43 compute-1 sshd-session[248577]: Received disconnect from 193.46.255.217 port 36286:11:  [preauth]
Oct 10 10:21:43 compute-1 sshd-session[248577]: Disconnected from authenticating user root 193.46.255.217 port 36286 [preauth]
Oct 10 10:21:43 compute-1 sshd-session[248577]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.217  user=root
Oct 10 10:21:43 compute-1 nova_compute[235132]: 2025-10-10 10:21:43.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:44 compute-1 unix_chkpwd[248681]: password check failed for user (root)
Oct 10 10:21:44 compute-1 sshd-session[248678]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.217  user=root
Oct 10 10:21:44 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:44 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.003000081s ======
Oct 10 10:21:44 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:44.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000081s
Oct 10 10:21:45 compute-1 ceph-mon[79167]: pgmap v1074: 353 pgs: 353 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 10 10:21:45 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:45 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:21:45 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:45.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:21:45 compute-1 nova_compute[235132]: 2025-10-10 10:21:45.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:45 compute-1 nova_compute[235132]: 2025-10-10 10:21:45.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:45 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:21:45.836 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:dc:6a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:2f:dd:4e:d8:41'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:21:45 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:21:45.837 141156 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 10 10:21:45 compute-1 sshd-session[248678]: Failed password for root from 193.46.255.217 port 58724 ssh2
Oct 10 10:21:45 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:21:45.839 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ee0899c1-415d-4aa8-abe8-1240b4e8bf2c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:21:46 compute-1 unix_chkpwd[248683]: password check failed for user (root)
Oct 10 10:21:46 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:46 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:21:46 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:46.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:21:47 compute-1 ceph-mon[79167]: pgmap v1075: 353 pgs: 353 active+clean; 121 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 10 10:21:47 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:21:47 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:47 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:47 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:47.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:21:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:21:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:21:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:21:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:21:48 compute-1 sshd-session[248678]: Failed password for root from 193.46.255.217 port 58724 ssh2
Oct 10 10:21:48 compute-1 nova_compute[235132]: 2025-10-10 10:21:48.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:48 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:48 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:21:48 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:48.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:21:48 compute-1 unix_chkpwd[248685]: password check failed for user (root)
Oct 10 10:21:49 compute-1 ceph-mon[79167]: pgmap v1076: 353 pgs: 353 active+clean; 41 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 339 KiB/s rd, 2.1 MiB/s wr, 83 op/s
Oct 10 10:21:49 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/991923566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:21:49 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:49 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:21:49 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:49.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:21:50 compute-1 sshd-session[248678]: Failed password for root from 193.46.255.217 port 58724 ssh2
Oct 10 10:21:50 compute-1 nova_compute[235132]: 2025-10-10 10:21:50.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:50 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:50 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:50 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:50.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:51 compute-1 ceph-mon[79167]: pgmap v1077: 353 pgs: 353 active+clean; 41 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 51 KiB/s wr, 31 op/s
Oct 10 10:21:51 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:51 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:51 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:51.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:51 compute-1 sshd-session[248678]: Received disconnect from 193.46.255.217 port 58724:11:  [preauth]
Oct 10 10:21:51 compute-1 sshd-session[248678]: Disconnected from authenticating user root 193.46.255.217 port 58724 [preauth]
Oct 10 10:21:51 compute-1 sshd-session[248678]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=193.46.255.217  user=root
Oct 10 10:21:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:21:52 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:52 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:52 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:52.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:21:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:21:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:21:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:21:53 compute-1 ceph-mon[79167]: pgmap v1078: 353 pgs: 353 active+clean; 41 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 51 KiB/s wr, 31 op/s
Oct 10 10:21:53 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:53 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:53 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:53.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:53 compute-1 nova_compute[235132]: 2025-10-10 10:21:53.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:54 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:54 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:21:54 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:54.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:21:54 compute-1 podman[248689]: 2025-10-10 10:21:54.976900224 +0000 UTC m=+0.068162605 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 10:21:55 compute-1 ceph-mon[79167]: pgmap v1079: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 52 KiB/s wr, 41 op/s
Oct 10 10:21:55 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:55 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:55 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:55.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:55 compute-1 nova_compute[235132]: 2025-10-10 10:21:55.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:56 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:56 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:21:56 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:56.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:21:57 compute-1 ceph-mon[79167]: pgmap v1080: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 13 KiB/s wr, 29 op/s
Oct 10 10:21:57 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:57 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:57 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:57.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:21:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:21:57 compute-1 nova_compute[235132]: 2025-10-10 10:21:57.686 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:21:57 compute-1 nova_compute[235132]: 2025-10-10 10:21:57.687 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 10:21:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:21:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:21:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:21:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:21:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:21:58 compute-1 nova_compute[235132]: 2025-10-10 10:21:58.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:21:58 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:58 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:21:58 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:21:58.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:21:59 compute-1 nova_compute[235132]: 2025-10-10 10:21:59.045 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:21:59 compute-1 nova_compute[235132]: 2025-10-10 10:21:59.046 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:21:59 compute-1 ceph-mon[79167]: pgmap v1081: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 13 KiB/s wr, 29 op/s
Oct 10 10:21:59 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:21:59 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:21:59 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:21:59.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:00 compute-1 nova_compute[235132]: 2025-10-10 10:22:00.041 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:22:00 compute-1 nova_compute[235132]: 2025-10-10 10:22:00.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:22:00 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/4151122925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:22:00 compute-1 nova_compute[235132]: 2025-10-10 10:22:00.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:22:00 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:00 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:00 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:00.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:00 compute-1 sudo[248712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:22:00 compute-1 sudo[248712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:22:00 compute-1 sudo[248712]: pam_unix(sudo:session): session closed for user root
Oct 10 10:22:01 compute-1 nova_compute[235132]: 2025-10-10 10:22:01.040 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:22:01 compute-1 nova_compute[235132]: 2025-10-10 10:22:01.069 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:22:01 compute-1 nova_compute[235132]: 2025-10-10 10:22:01.069 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 10:22:01 compute-1 nova_compute[235132]: 2025-10-10 10:22:01.070 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 10:22:01 compute-1 nova_compute[235132]: 2025-10-10 10:22:01.095 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 10 10:22:01 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:01 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:22:01 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:01.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:22:01 compute-1 ceph-mon[79167]: pgmap v1082: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 596 B/s wr, 11 op/s
Oct 10 10:22:01 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2003623' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:22:02 compute-1 nova_compute[235132]: 2025-10-10 10:22:02.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:22:02 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:22:02 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2212575521' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:22:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:22:02 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:02 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:22:02 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:02.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:22:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:22:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:22:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:22:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:22:03 compute-1 nova_compute[235132]: 2025-10-10 10:22:03.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:22:03 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:03 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:22:03 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:03.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:22:03 compute-1 ceph-mon[79167]: pgmap v1083: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 597 B/s wr, 11 op/s
Oct 10 10:22:03 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2094992256' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:22:03 compute-1 nova_compute[235132]: 2025-10-10 10:22:03.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:22:04 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:04 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:04 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:04.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:05 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:05 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:05 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:05.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:05 compute-1 ceph-mon[79167]: pgmap v1084: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 8.5 KiB/s rd, 597 B/s wr, 11 op/s
Oct 10 10:22:05 compute-1 nova_compute[235132]: 2025-10-10 10:22:05.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:22:06 compute-1 nova_compute[235132]: 2025-10-10 10:22:06.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:22:06 compute-1 nova_compute[235132]: 2025-10-10 10:22:06.097 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:22:06 compute-1 nova_compute[235132]: 2025-10-10 10:22:06.098 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:22:06 compute-1 nova_compute[235132]: 2025-10-10 10:22:06.099 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:22:06 compute-1 nova_compute[235132]: 2025-10-10 10:22:06.099 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:22:06 compute-1 nova_compute[235132]: 2025-10-10 10:22:06.100 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:22:06 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:22:06 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1956938146' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:22:06 compute-1 nova_compute[235132]: 2025-10-10 10:22:06.603 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:22:06 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:06 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:22:06 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:06.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:22:06 compute-1 nova_compute[235132]: 2025-10-10 10:22:06.792 2 WARNING nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:22:06 compute-1 nova_compute[235132]: 2025-10-10 10:22:06.794 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4907MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:22:06 compute-1 nova_compute[235132]: 2025-10-10 10:22:06.794 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:22:06 compute-1 nova_compute[235132]: 2025-10-10 10:22:06.794 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:22:06 compute-1 nova_compute[235132]: 2025-10-10 10:22:06.869 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:22:06 compute-1 nova_compute[235132]: 2025-10-10 10:22:06.869 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:22:06 compute-1 nova_compute[235132]: 2025-10-10 10:22:06.886 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:22:07 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:07 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:07 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:07.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:07 compute-1 ceph-mon[79167]: pgmap v1085: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:22:07 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/1956938146' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:22:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:22:07 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3246728595' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:22:07 compute-1 nova_compute[235132]: 2025-10-10 10:22:07.349 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:22:07 compute-1 nova_compute[235132]: 2025-10-10 10:22:07.355 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:22:07 compute-1 nova_compute[235132]: 2025-10-10 10:22:07.371 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:22:07 compute-1 nova_compute[235132]: 2025-10-10 10:22:07.373 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:22:07 compute-1 nova_compute[235132]: 2025-10-10 10:22:07.373 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.579s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:22:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:22:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:22:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:22:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:22:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:22:08 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3246728595' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:22:08 compute-1 nova_compute[235132]: 2025-10-10 10:22:08.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:22:08 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:08 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:08 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:08.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:09 compute-1 podman[248785]: 2025-10-10 10:22:09.01156206 +0000 UTC m=+0.103646985 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid)
Oct 10 10:22:09 compute-1 podman[248786]: 2025-10-10 10:22:09.014384407 +0000 UTC m=+0.098858035 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct 10 10:22:09 compute-1 podman[248787]: 2025-10-10 10:22:09.065292239 +0000 UTC m=+0.146288781 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 10 10:22:09 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:09 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:09 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:09.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:09 compute-1 ceph-mon[79167]: pgmap v1086: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:22:10 compute-1 nova_compute[235132]: 2025-10-10 10:22:10.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:22:10 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:10 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:10 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:10.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:11 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:11 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:11 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:11.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:11 compute-1 ceph-mon[79167]: pgmap v1087: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:22:12 compute-1 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 10 10:22:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:22:12 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:12 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:12 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:12.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:22:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:13 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:22:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:13 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:22:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:13 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:22:13 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:13 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:22:13 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:13.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:22:13 compute-1 ceph-mon[79167]: pgmap v1088: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:22:13 compute-1 nova_compute[235132]: 2025-10-10 10:22:13.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:22:14 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:14 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:14 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:14.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:15 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:15 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:15 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:15.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:15 compute-1 ceph-mon[79167]: pgmap v1089: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:22:15 compute-1 nova_compute[235132]: 2025-10-10 10:22:15.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:22:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:22:16 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:16 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:22:16 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:16.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:22:17 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:17 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:22:17 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:17.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:22:17 compute-1 ceph-mon[79167]: pgmap v1090: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:22:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:22:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:22:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:22:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:22:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:22:18 compute-1 nova_compute[235132]: 2025-10-10 10:22:18.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:22:18 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:18 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:18 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:18.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:19 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:19 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:22:19 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:19.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:22:19 compute-1 ceph-mon[79167]: pgmap v1091: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:22:20 compute-1 ceph-mon[79167]: pgmap v1092: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:22:20 compute-1 nova_compute[235132]: 2025-10-10 10:22:20.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:22:20 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:20 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:20 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:20.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:20 compute-1 sudo[248855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:22:21 compute-1 sudo[248855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:22:21 compute-1 sudo[248855]: pam_unix(sudo:session): session closed for user root
Oct 10 10:22:21 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:21 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:22:21 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:21.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:22:22 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:22:22 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:22 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:22:22 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:22.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:22:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:22:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:22:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:22:23 compute-1 ceph-mon[79167]: pgmap v1093: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:22:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:22:23 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:23 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:23 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:23.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:23 compute-1 nova_compute[235132]: 2025-10-10 10:22:23.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:22:24 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:24 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:22:24 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:24.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:22:25 compute-1 ceph-mon[79167]: pgmap v1094: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:22:25 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:25 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:25 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:25.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:25 compute-1 nova_compute[235132]: 2025-10-10 10:22:25.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:22:25 compute-1 podman[248883]: 2025-10-10 10:22:25.970262248 +0000 UTC m=+0.070078757 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 10:22:26 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:26 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:22:26 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:26.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:22:26 compute-1 sudo[248902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:22:26 compute-1 sudo[248902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:22:26 compute-1 sudo[248902]: pam_unix(sudo:session): session closed for user root
Oct 10 10:22:26 compute-1 sudo[248927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:22:26 compute-1 sudo[248927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:22:27 compute-1 ceph-mon[79167]: pgmap v1095: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:22:27 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/4103532067' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:22:27 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/4103532067' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:22:27 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:27 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:27 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:27.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:27 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:22:27 compute-1 sudo[248927]: pam_unix(sudo:session): session closed for user root
Oct 10 10:22:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:22:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:22:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:22:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:22:28 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:22:28 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:22:28 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:22:28 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:22:28 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:22:28 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:22:28 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:22:28 compute-1 nova_compute[235132]: 2025-10-10 10:22:28.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:22:28 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:28 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:22:28 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:28.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:22:29 compute-1 ceph-mon[79167]: pgmap v1096: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:22:29 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:29 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:29 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:29.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:29 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #64. Immutable memtables: 0.
Oct 10 10:22:29 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:22:29.461045) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 10:22:29 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 64
Oct 10 10:22:29 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091749461090, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 773, "num_deletes": 250, "total_data_size": 1452613, "memory_usage": 1476256, "flush_reason": "Manual Compaction"}
Oct 10 10:22:29 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #65: started
Oct 10 10:22:29 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091749471472, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 65, "file_size": 956064, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33562, "largest_seqno": 34330, "table_properties": {"data_size": 952472, "index_size": 1436, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7475, "raw_average_key_size": 17, "raw_value_size": 945175, "raw_average_value_size": 2172, "num_data_blocks": 64, "num_entries": 435, "num_filter_entries": 435, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760091695, "oldest_key_time": 1760091695, "file_creation_time": 1760091749, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:22:29 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 10459 microseconds, and 6351 cpu microseconds.
Oct 10 10:22:29 compute-1 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:22:29 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:22:29.471507) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #65: 956064 bytes OK
Oct 10 10:22:29 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:22:29.471524) [db/memtable_list.cc:519] [default] Level-0 commit table #65 started
Oct 10 10:22:29 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:22:29.472587) [db/memtable_list.cc:722] [default] Level-0 commit table #65: memtable #1 done
Oct 10 10:22:29 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:22:29.472597) EVENT_LOG_v1 {"time_micros": 1760091749472593, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 10:22:29 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:22:29.472608) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 10:22:29 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 1448524, prev total WAL file size 1448524, number of live WAL files 2.
Oct 10 10:22:29 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000061.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:22:29 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:22:29.473124) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323531' seq:72057594037927935, type:22 .. '6B7600353032' seq:0, type:0; will stop at (end)
Oct 10 10:22:29 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 10:22:29 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [65(933KB)], [63(13MB)]
Oct 10 10:22:29 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091749473189, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [65], "files_L6": [63], "score": -1, "input_data_size": 14989262, "oldest_snapshot_seqno": -1}
Oct 10 10:22:29 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #66: 6134 keys, 13815009 bytes, temperature: kUnknown
Oct 10 10:22:29 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091749549702, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 66, "file_size": 13815009, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13774792, "index_size": 23787, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15365, "raw_key_size": 159179, "raw_average_key_size": 25, "raw_value_size": 13664907, "raw_average_value_size": 2227, "num_data_blocks": 942, "num_entries": 6134, "num_filter_entries": 6134, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089522, "oldest_key_time": 0, "file_creation_time": 1760091749, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 66, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:22:29 compute-1 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:22:29 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:22:29.550045) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 13815009 bytes
Oct 10 10:22:29 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:22:29.553736) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 195.6 rd, 180.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 13.4 +0.0 blob) out(13.2 +0.0 blob), read-write-amplify(30.1) write-amplify(14.4) OK, records in: 6646, records dropped: 512 output_compression: NoCompression
Oct 10 10:22:29 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:22:29.553796) EVENT_LOG_v1 {"time_micros": 1760091749553779, "job": 38, "event": "compaction_finished", "compaction_time_micros": 76630, "compaction_time_cpu_micros": 54152, "output_level": 6, "num_output_files": 1, "total_output_size": 13815009, "num_input_records": 6646, "num_output_records": 6134, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 10:22:29 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:22:29 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091749555047, "job": 38, "event": "table_file_deletion", "file_number": 65}
Oct 10 10:22:29 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000063.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:22:29 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091749560015, "job": 38, "event": "table_file_deletion", "file_number": 63}
Oct 10 10:22:29 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:22:29.473009) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:22:29 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:22:29.560089) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:22:29 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:22:29.560094) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:22:29 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:22:29.560096) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:22:29 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:22:29.560097) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:22:29 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:22:29.560099) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:22:30 compute-1 ceph-mon[79167]: pgmap v1097: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:22:30 compute-1 nova_compute[235132]: 2025-10-10 10:22:30.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:22:30 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:30 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:22:30 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:30.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:22:31 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:31 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:31 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:31.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:31 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:22:32 compute-1 sudo[248986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:22:32 compute-1 sudo[248986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:22:32 compute-1 sudo[248986]: pam_unix(sudo:session): session closed for user root
Oct 10 10:22:32 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:22:32 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:32 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:32 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:32.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:22:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:22:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:22:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:22:33 compute-1 ceph-mon[79167]: pgmap v1098: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:22:33 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:22:33 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:22:33 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:33 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:33 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:33.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:33 compute-1 nova_compute[235132]: 2025-10-10 10:22:33.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:22:34 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:34 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:22:34 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:34.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:22:35 compute-1 ceph-mon[79167]: pgmap v1099: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:22:35 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:35 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:35 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:35.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:35 compute-1 nova_compute[235132]: 2025-10-10 10:22:35.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:22:36 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:36 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:22:36 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:36.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:22:37 compute-1 ceph-mon[79167]: pgmap v1100: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:22:37 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:37 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:22:37 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:37.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:22:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:22:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:22:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:22:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:22:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:22:38 compute-1 nova_compute[235132]: 2025-10-10 10:22:38.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:22:38 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:38 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:22:38 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:38.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:22:39 compute-1 ceph-mon[79167]: pgmap v1101: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:22:39 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:39 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:39 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:39.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:39 compute-1 podman[249016]: 2025-10-10 10:22:39.98923917 +0000 UTC m=+0.078599000 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd)
Oct 10 10:22:40 compute-1 podman[249017]: 2025-10-10 10:22:40.022299283 +0000 UTC m=+0.109210247 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:22:40 compute-1 podman[249015]: 2025-10-10 10:22:40.02439262 +0000 UTC m=+0.112761304 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 10 10:22:40 compute-1 nova_compute[235132]: 2025-10-10 10:22:40.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:22:40 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:40 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:40 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:40.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:41 compute-1 sudo[249081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:22:41 compute-1 sudo[249081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:22:41 compute-1 sudo[249081]: pam_unix(sudo:session): session closed for user root
Oct 10 10:22:41 compute-1 ceph-mon[79167]: pgmap v1102: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:22:41 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:41 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:22:41 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:41.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:22:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:22:42.222 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:22:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:22:42.222 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:22:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:22:42.223 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:22:42 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:22:42 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:42 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:22:42 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:42.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:22:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:22:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:22:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:43 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:22:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:43 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:22:43 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:43 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:43 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:43.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:43 compute-1 ceph-mon[79167]: pgmap v1103: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:22:43 compute-1 nova_compute[235132]: 2025-10-10 10:22:43.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:22:43 compute-1 sshd-session[249108]: Accepted publickey for zuul from 192.168.122.10 port 54974 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 10:22:43 compute-1 systemd-logind[789]: New session 57 of user zuul.
Oct 10 10:22:43 compute-1 systemd[1]: Started Session 57 of User zuul.
Oct 10 10:22:43 compute-1 sshd-session[249108]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 10:22:43 compute-1 sudo[249112]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp -p container,openstack_edpm,system,storage,virt'
Oct 10 10:22:43 compute-1 sudo[249112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:22:44 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:44 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:22:44 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:44.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:22:45 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:45 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:45 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:45.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:45 compute-1 ceph-mon[79167]: pgmap v1104: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:22:45 compute-1 nova_compute[235132]: 2025-10-10 10:22:45.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:22:46 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:22:46 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:46 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:46 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:46.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:47 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:47 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:47 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:47.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:47 compute-1 ceph-mon[79167]: pgmap v1105: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:22:47 compute-1 ceph-mon[79167]: from='client.26149 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:47 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/3034678619' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 10 10:22:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:22:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0)
Oct 10 10:22:47 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1845402693' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 10 10:22:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:22:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:22:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:22:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:22:48 compute-1 ceph-mon[79167]: from='client.25805 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:48 compute-1 ceph-mon[79167]: from='client.16650 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:48 compute-1 ceph-mon[79167]: from='client.26158 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:48 compute-1 ceph-mon[79167]: from='client.25814 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:48 compute-1 ceph-mon[79167]: from='client.16662 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:48 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/1845402693' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 10 10:22:48 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1224328138' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 10 10:22:48 compute-1 nova_compute[235132]: 2025-10-10 10:22:48.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:22:48 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:48 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 10:22:48 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:48.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 10:22:49 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:49 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:22:49 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:49.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:22:49 compute-1 ceph-mon[79167]: pgmap v1106: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:22:50 compute-1 ceph-mon[79167]: pgmap v1107: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:22:50 compute-1 nova_compute[235132]: 2025-10-10 10:22:50.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:22:50 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:50 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:22:50 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:50.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:22:51 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:51 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:51 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:51.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:22:52 compute-1 ceph-mon[79167]: pgmap v1108: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:22:52 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:52 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:22:52 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:52.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:22:52 compute-1 ovs-vsctl[249481]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct 10 10:22:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:22:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:22:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:22:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:22:53 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:53 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:22:53 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:53.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:22:53 compute-1 nova_compute[235132]: 2025-10-10 10:22:53.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:22:54 compute-1 virtqemud[234629]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct 10 10:22:54 compute-1 virtqemud[234629]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct 10 10:22:54 compute-1 virtqemud[234629]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 10 10:22:54 compute-1 ceph-mon[79167]: pgmap v1109: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:22:54 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:54 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:54 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:54.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:54 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: cache status {prefix=cache status} (starting...)
Oct 10 10:22:54 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Can't run that command on an inactive MDS!
Oct 10 10:22:54 compute-1 lvm[249791]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 10:22:54 compute-1 lvm[249791]: VG ceph_vg0 finished
Oct 10 10:22:55 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: client ls {prefix=client ls} (starting...)
Oct 10 10:22:55 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Can't run that command on an inactive MDS!
Oct 10 10:22:55 compute-1 kernel: block loop3: the capability attribute has been deprecated.
Oct 10 10:22:55 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:55 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:22:55 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:55.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:22:55 compute-1 ceph-mon[79167]: from='client.26173 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:55 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1672425821' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 10 10:22:55 compute-1 ceph-mon[79167]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 10 10:22:55 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: damage ls {prefix=damage ls} (starting...)
Oct 10 10:22:55 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Can't run that command on an inactive MDS!
Oct 10 10:22:55 compute-1 nova_compute[235132]: 2025-10-10 10:22:55.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:22:55 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: dump loads {prefix=dump loads} (starting...)
Oct 10 10:22:55 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Can't run that command on an inactive MDS!
Oct 10 10:22:55 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "report"} v 0)
Oct 10 10:22:55 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/588890842' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 10 10:22:55 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct 10 10:22:55 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Can't run that command on an inactive MDS!
Oct 10 10:22:56 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct 10 10:22:56 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Can't run that command on an inactive MDS!
Oct 10 10:22:56 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct 10 10:22:56 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Can't run that command on an inactive MDS!
Oct 10 10:22:56 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:22:56 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2495586288' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:22:56 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct 10 10:22:56 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Can't run that command on an inactive MDS!
Oct 10 10:22:56 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct 10 10:22:56 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Can't run that command on an inactive MDS!
Oct 10 10:22:56 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config log"} v 0)
Oct 10 10:22:56 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2037339908' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 10 10:22:56 compute-1 ceph-mon[79167]: from='client.25829 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:56 compute-1 ceph-mon[79167]: from='client.26185 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:56 compute-1 ceph-mon[79167]: pgmap v1110: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:22:56 compute-1 ceph-mon[79167]: from='client.16677 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:56 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1860179327' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:22:56 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1736330349' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 10 10:22:56 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/588890842' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 10 10:22:56 compute-1 ceph-mon[79167]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 10 10:22:56 compute-1 ceph-mon[79167]: from='client.25847 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:56 compute-1 ceph-mon[79167]: from='client.26209 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:56 compute-1 ceph-mon[79167]: from='client.16692 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:56 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/4222220785' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 10 10:22:56 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/661691761' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:22:56 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/2495586288' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:22:56 compute-1 ceph-mon[79167]: from='client.25862 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:56 compute-1 ceph-mon[79167]: from='client.26227 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:56 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/3187780686' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 10 10:22:56 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/2037339908' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 10 10:22:56 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1469971866' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 10 10:22:56 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct 10 10:22:56 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Can't run that command on an inactive MDS!
Oct 10 10:22:56 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:56 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:56 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:56.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:56 compute-1 podman[250116]: 2025-10-10 10:22:56.964157101 +0000 UTC m=+0.070492018 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 10 10:22:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Oct 10 10:22:57 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2477553374' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 10 10:22:57 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: ops {prefix=ops} (starting...)
Oct 10 10:22:57 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Can't run that command on an inactive MDS!
Oct 10 10:22:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Oct 10 10:22:57 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2039649700' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 10 10:22:57 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:57 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:57 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:57.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:22:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Oct 10 10:22:57 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2988512648' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 10 10:22:57 compute-1 ceph-mon[79167]: from='client.16713 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:57 compute-1 ceph-mon[79167]: from='client.25874 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:57 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2388601897' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 10 10:22:57 compute-1 ceph-mon[79167]: from='client.16731 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:57 compute-1 ceph-mon[79167]: from='client.26257 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:57 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/2477553374' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 10 10:22:57 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/610000102' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 10 10:22:57 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/2039649700' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 10 10:22:57 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1390305313' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 10 10:22:57 compute-1 ceph-mon[79167]: from='client.26281 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:57 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/4242818798' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 10 10:22:57 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/2988512648' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 10 10:22:57 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/778203399' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 10 10:22:57 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: session ls {prefix=session ls} (starting...)
Oct 10 10:22:57 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Can't run that command on an inactive MDS!
Oct 10 10:22:57 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: status {prefix=status} (starting...)
Oct 10 10:22:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:22:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:22:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:22:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:22:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:22:58 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Oct 10 10:22:58 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3421568049' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 10 10:22:58 compute-1 nova_compute[235132]: 2025-10-10 10:22:58.374 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:22:58 compute-1 nova_compute[235132]: 2025-10-10 10:22:58.375 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 10:22:58 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "features"} v 0)
Oct 10 10:22:58 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1035999956' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 10 10:22:58 compute-1 nova_compute[235132]: 2025-10-10 10:22:58.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:22:58 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Oct 10 10:22:58 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1488282981' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 10 10:22:58 compute-1 ceph-mon[79167]: from='client.16764 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:58 compute-1 ceph-mon[79167]: from='client.25895 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:58 compute-1 ceph-mon[79167]: pgmap v1111: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:22:58 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2332321882' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 10 10:22:58 compute-1 ceph-mon[79167]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 10 10:22:58 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2036159140' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 10 10:22:58 compute-1 ceph-mon[79167]: from='client.26314 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:58 compute-1 ceph-mon[79167]: from='client.25916 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:58 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3421568049' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 10 10:22:58 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/3063974506' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 10 10:22:58 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1401553685' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 10 10:22:58 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/512450568' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 10 10:22:58 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/1035999956' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 10 10:22:58 compute-1 ceph-mon[79167]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 10 10:22:58 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1328721408' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 10 10:22:58 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/1488282981' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 10 10:22:58 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/832198139' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 10:22:58 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3760174204' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 10 10:22:58 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:58 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:58 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:22:58.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:58 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Oct 10 10:22:58 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3421045328' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 10 10:22:58 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct 10 10:22:58 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3233252603' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 10:22:59 compute-1 nova_compute[235132]: 2025-10-10 10:22:59.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:22:59 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:22:59 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:22:59 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:22:59.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:22:59 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Oct 10 10:22:59 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3055579754' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 10 10:22:59 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Oct 10 10:22:59 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3635718541' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 10 10:22:59 compute-1 ceph-mon[79167]: from='client.26362 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:59 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3421045328' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 10 10:22:59 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1758553721' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 10 10:22:59 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3233252603' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 10:22:59 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/4260196095' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 10 10:22:59 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/699035021' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 10 10:22:59 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1647674057' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 10:22:59 compute-1 ceph-mon[79167]: from='client.25967 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:59 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2142363579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:22:59 compute-1 ceph-mon[79167]: from='client.16857 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:22:59 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3055579754' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 10 10:22:59 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/4087547626' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 10 10:22:59 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1467050544' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 10 10:22:59 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2700707402' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 10 10:22:59 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3635718541' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 10 10:22:59 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Oct 10 10:22:59 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/571390437' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 10 10:23:00 compute-1 nova_compute[235132]: 2025-10-10 10:23:00.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:23:00 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Oct 10 10:23:00 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/175764086' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 10 10:23:00 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Oct 10 10:23:00 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2561366827' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 10 10:23:00 compute-1 nova_compute[235132]: 2025-10-10 10:23:00.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:23:00 compute-1 ceph-mon[79167]: pgmap v1112: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:23:00 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/571390437' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 10 10:23:00 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/19710570' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 10 10:23:00 compute-1 ceph-mon[79167]: from='client.26431 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:00 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1751714302' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:23:00 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2444127767' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 10 10:23:00 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3224783318' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 10 10:23:00 compute-1 ceph-mon[79167]: from='client.26012 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:00 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/175764086' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 10 10:23:00 compute-1 ceph-mon[79167]: from='client.16902 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:00 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3228736852' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 10 10:23:00 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/670445082' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 10 10:23:00 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/2561366827' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 10 10:23:00 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:00 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:23:00 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:00.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:23:01 compute-1 nova_compute[235132]: 2025-10-10 10:23:01.039 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:23:01 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Oct 10 10:23:01 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1569461498' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 10 10:23:01 compute-1 sudo[250812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:23:01 compute-1 sudo[250812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:23:01 compute-1 sudo[250812]: pam_unix(sudo:session): session closed for user root
Oct 10 10:23:01 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:01 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:01 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:01.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:01 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Oct 10 10:23:01 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2579803735' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 80896000 unmapped: 4964352 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:35.186115+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 80896000 unmapped: 4964352 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:36.186259+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099027400 session 0x55b09723da40
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b096d95c00 session 0x55b0988ae1e0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984329 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 80904192 unmapped: 4956160 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:37.186390+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 4939776 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:38.186538+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 80920576 unmapped: 4939776 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:39.186713+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 80928768 unmapped: 4931584 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:40.186892+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 80928768 unmapped: 4931584 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:41.187054+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984329 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 80936960 unmapped: 4923392 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 10:23:01 compute-1 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:42.187280+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 80936960 unmapped: 4923392 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:43.187391+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 80945152 unmapped: 4915200 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:44.187557+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 80945152 unmapped: 4915200 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:45.187737+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 80945152 unmapped: 4915200 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:46.187876+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984329 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 80953344 unmapped: 4907008 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.927526474s of 13.931305885s, submitted: 1
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:47.188120+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 80961536 unmapped: 4898816 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:48.188431+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 4890624 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:49.188761+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 4890624 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:50.189020+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 80969728 unmapped: 4890624 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:51.189266+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 984461 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 4882432 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:52.189495+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 80977920 unmapped: 4882432 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:53.189754+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b097aa4000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 80986112 unmapped: 4874240 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:54.189991+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 4866048 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:55.190253+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 4866048 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:56.190419+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985382 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 80994304 unmapped: 4866048 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:57.190577+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:58.190802+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81018880 unmapped: 4841472 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:50:59.190984+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81018880 unmapped: 4841472 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:00.191159+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81027072 unmapped: 4833280 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:01.191352+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81027072 unmapped: 4833280 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.529447556s of 14.540608406s, submitted: 3
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985250 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:02.191533+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81027072 unmapped: 4833280 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:03.191704+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81035264 unmapped: 4825088 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:04.191906+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81035264 unmapped: 4825088 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:05.192122+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81043456 unmapped: 4816896 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:06.192276+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b0963a4800 session 0x55b0987bf860
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81043456 unmapped: 4816896 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985250 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:07.192492+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81043456 unmapped: 4816896 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:08.192730+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81059840 unmapped: 4800512 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:09.192942+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81059840 unmapped: 4800512 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:10.193097+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81068032 unmapped: 4792320 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:11.193278+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81068032 unmapped: 4792320 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985250 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:12.193504+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81076224 unmapped: 4784128 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:13.193686+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81076224 unmapped: 4784128 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:14.193874+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81076224 unmapped: 4784128 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:15.194131+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81084416 unmapped: 4775936 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:16.194307+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81084416 unmapped: 4775936 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 985250 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0963a4800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.334420204s of 15.342825890s, submitted: 1
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:17.194516+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 4767744 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:18.194669+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 4767744 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:19.194909+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81092608 unmapped: 4767744 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:20.195069+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81100800 unmapped: 4759552 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:21.195227+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81100800 unmapped: 4759552 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986894 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:22.195388+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81100800 unmapped: 4759552 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b096d95c00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:23.195532+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81108992 unmapped: 4751360 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:24.195689+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81108992 unmapped: 4751360 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:25.195818+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81117184 unmapped: 4743168 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:26.195965+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81117184 unmapped: 4743168 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987815 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:27.196151+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81117184 unmapped: 4743168 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:28.196345+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 4734976 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:29.196561+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 4734976 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:30.196884+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81133568 unmapped: 4726784 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:31.197043+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81133568 unmapped: 4726784 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987815 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:32.197224+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 4710400 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.439384460s of 15.455260277s, submitted: 4
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:33.197370+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 4710400 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:34.197532+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 4710400 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:35.197702+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 4710400 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:36.197895+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 4710400 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987683 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:37.198046+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 4710400 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:38.198198+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 4702208 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:39.198379+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 4702208 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:40.198531+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81166336 unmapped: 4694016 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:41.198663+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81166336 unmapped: 4694016 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987683 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:42.198843+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 4685824 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:43.198995+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 4685824 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:44.199145+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 4677632 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:45.199314+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 4677632 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:46.199501+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81182720 unmapped: 4677632 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987683 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:47.199657+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81190912 unmapped: 4669440 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:48.199798+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81190912 unmapped: 4669440 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:49.199961+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 4661248 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:50.200118+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 4661248 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:51.200254+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 4653056 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987683 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:52.200416+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 4653056 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:53.200552+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 4653056 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:54.200743+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 4644864 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:55.200998+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 4644864 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:56.201171+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 4636672 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987683 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:57.201369+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 4636672 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:58.201492+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81223680 unmapped: 4636672 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:51:59.201715+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 4628480 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:00.201908+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 4628480 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:01.202060+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81231872 unmapped: 4628480 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987683 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:02.202212+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 4620288 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:03.202407+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81240064 unmapped: 4620288 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:04.202669+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 4612096 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:05.202961+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81248256 unmapped: 4612096 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:06.203152+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81256448 unmapped: 4603904 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987683 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:07.203410+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81256448 unmapped: 4603904 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:08.203616+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81256448 unmapped: 4603904 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:09.203969+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81264640 unmapped: 4595712 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:10.204186+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81264640 unmapped: 4595712 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:11.204435+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 4587520 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987683 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:12.204611+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 4587520 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:13.204810+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81281024 unmapped: 4579328 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:14.204961+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81281024 unmapped: 4579328 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:15.205144+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81281024 unmapped: 4579328 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:16.205378+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 4571136 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987683 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:17.206452+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 4571136 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:18.206581+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 4571136 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:19.207273+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81297408 unmapped: 4562944 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:20.208373+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81305600 unmapped: 4554752 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:21.208977+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81305600 unmapped: 4554752 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987683 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:22.209246+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81305600 unmapped: 4554752 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:23.209614+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 4546560 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:24.209859+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 4546560 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:25.210003+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81313792 unmapped: 4546560 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:26.210225+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 4538368 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:27.210448+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987683 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81321984 unmapped: 4538368 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:28.210608+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 4530176 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:29.210798+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81330176 unmapped: 4530176 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:30.210953+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 4521984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:31.211133+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 4521984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b096d95c00 session 0x55b0991df680
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b0963a4800 session 0x55b09722ed20
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:32.211443+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987683 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 4521984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:33.211664+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81338368 unmapped: 4521984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:34.211869+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 4513792 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:35.212067+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81346560 unmapped: 4513792 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:36.212265+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 4505600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:37.212438+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987683 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81354752 unmapped: 4505600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:38.212605+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 4497408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:39.212849+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81362944 unmapped: 4497408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:40.213023+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 4489216 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:41.213190+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 4489216 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:42.213401+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987683 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81371136 unmapped: 4489216 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b096dca000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 70.248580933s of 70.251838684s, submitted: 1
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:43.213580+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 4481024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:44.213719+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 4481024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:45.213908+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 4472832 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:46.214084+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 4472832 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:47.214255+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987815 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81387520 unmapped: 4472832 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:48.214445+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 4464640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:49.214654+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81395712 unmapped: 4464640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:50.214798+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 4456448 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:51.214972+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81403904 unmapped: 4456448 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:52.215146+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987224 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 4448256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:53.215299+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81412096 unmapped: 4448256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:54.215554+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 4440064 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.062088966s of 12.070409775s, submitted: 2
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:55.215702+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 4440064 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:56.215826+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 4440064 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:57.215966+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986501 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 4431872 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:58.216085+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 4431872 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:52:59.216240+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 4423680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:00.216410+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 4423680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:01.216573+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 4415488 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:02.216693+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986501 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 4415488 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:03.216869+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 4415488 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:04.217016+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 4407296 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:05.217136+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81453056 unmapped: 4407296 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:06.217263+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 4399104 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:07.217388+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986501 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81461248 unmapped: 4399104 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:08.217556+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 4390912 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:09.217780+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 4390912 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:10.217958+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 4390912 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:11.218092+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 4382720 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:12.218260+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986501 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81477632 unmapped: 4382720 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:13.218424+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 4374528 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:14.218581+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81485824 unmapped: 4374528 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:15.218803+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 4366336 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:16.218985+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 4366336 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:17.219208+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986501 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 4366336 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:18.219407+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 4358144 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:19.219610+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 4358144 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:20.219821+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 4358144 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:21.219996+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81510400 unmapped: 4349952 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:22.220161+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986501 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81510400 unmapped: 4349952 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:23.220386+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 4341760 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:24.220547+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 4341760 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:25.220720+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81526784 unmapped: 4333568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:26.220884+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81526784 unmapped: 4333568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:27.221072+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986501 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 4325376 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:28.221292+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 4325376 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:29.221575+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 4325376 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:30.221739+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 4325376 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:31.221918+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81543168 unmapped: 4317184 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:32.222066+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986501 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81543168 unmapped: 4317184 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:33.222275+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81551360 unmapped: 4308992 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:34.222433+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81551360 unmapped: 4308992 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:35.222752+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81559552 unmapped: 4300800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:36.222960+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81559552 unmapped: 4300800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:37.223130+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986501 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81567744 unmapped: 4292608 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:38.223401+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81567744 unmapped: 4292608 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:39.223708+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81567744 unmapped: 4292608 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:40.223913+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 4284416 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:41.224064+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026400 session 0x55b09722f0e0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026000 session 0x55b09840c1e0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81575936 unmapped: 4284416 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:42.224300+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986501 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 4276224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:43.225141+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 4276224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:44.225370+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81584128 unmapped: 4276224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:45.225503+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 4268032 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:46.225661+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 4268032 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:47.225845+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986501 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 4268032 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:48.226046+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 4259840 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:49.226368+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 4259840 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:50.226510+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 4251648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:51.226647+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 4251648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:52.226955+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027400
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 57.489089966s of 57.500850677s, submitted: 2
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b097aa4000 session 0x55b096d5fa40
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099027000 session 0x55b0988cde00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 986633 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 4243456 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:53.229389+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81625088 unmapped: 4235264 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:54.229555+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81625088 unmapped: 4235264 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:55.231030+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 4227072 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:56.231207+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 4227072 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:57.231824+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988145 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81641472 unmapped: 4218880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:58.232212+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81641472 unmapped: 4218880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:53:59.232531+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81641472 unmapped: 4218880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:00.232669+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81649664 unmapped: 4210688 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:01.232807+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81649664 unmapped: 4210688 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:02.232976+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988145 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81657856 unmapped: 4202496 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:03.233189+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0963a4800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.029865265s of 11.068701744s, submitted: 2
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81674240 unmapped: 4186112 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:04.233511+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81682432 unmapped: 4177920 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:05.233644+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81682432 unmapped: 4177920 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:06.233814+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81690624 unmapped: 4169728 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:07.234101+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988277 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81690624 unmapped: 4169728 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:08.234262+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81690624 unmapped: 4169728 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:09.234950+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b096d95c00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 4136960 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:10.235140+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81723392 unmapped: 4136960 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:11.235319+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 4128768 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:12.235554+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989657 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 4128768 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:13.235726+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81739776 unmapped: 4120576 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:14.235875+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81739776 unmapped: 4120576 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:15.236006+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.139680862s of 12.150322914s, submitted: 3
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81739776 unmapped: 4120576 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:16.236170+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 4112384 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:17.236383+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988934 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81682432 unmapped: 4177920 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:18.236528+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81690624 unmapped: 4169728 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:19.236688+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81690624 unmapped: 4169728 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:20.236901+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81698816 unmapped: 4161536 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:21.237066+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81698816 unmapped: 4161536 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:22.237234+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988934 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81698816 unmapped: 4161536 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:23.237463+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81707008 unmapped: 4153344 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:24.237613+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 8302 writes, 34K keys, 8302 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
                                           Cumulative WAL: 8302 writes, 1698 syncs, 4.89 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8302 writes, 34K keys, 8302 commit groups, 1.0 writes per commit group, ingest: 21.40 MB, 0.04 MB/s
                                           Interval WAL: 8302 writes, 1698 syncs, 4.89 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac69b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac69b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac69b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 4096000 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:25.237773+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 4096000 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:26.237934+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81772544 unmapped: 4087808 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:27.238103+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988934 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81772544 unmapped: 4087808 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:28.238304+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 4079616 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:29.238532+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 4079616 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:30.238881+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 4071424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:31.239403+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 4071424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:32.239545+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988934 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 4063232 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:33.239695+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 4063232 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:34.239864+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 4063232 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:35.240000+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 4055040 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:36.240149+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 4055040 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:37.240358+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988934 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81813504 unmapped: 4046848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:38.240561+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81813504 unmapped: 4046848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:39.240791+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81813504 unmapped: 4046848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:40.241008+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:41.241173+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 4038656 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:42.241310+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 4038656 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988934 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:43.241486+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81829888 unmapped: 4030464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:44.241694+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81829888 unmapped: 4030464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:45.241845+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81829888 unmapped: 4030464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:46.241921+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 4022272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:47.242057+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 4022272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988934 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:48.242172+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 4014080 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:49.242341+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 4014080 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:50.242502+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 4014080 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:51.242658+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 4005888 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:52.242830+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 4005888 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988934 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:53.242952+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3997696 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:54.243109+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 3997696 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:55.243279+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3989504 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:56.243425+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 3989504 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:57.243562+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3981312 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988934 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:58.244242+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3981312 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:54:59.244457+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 3981312 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:00.244601+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 3973120 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:01.244774+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 3973120 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:02.244938+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 3973120 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988934 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:03.245107+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 3964928 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:04.245298+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 3964928 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:05.245494+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 3956736 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:06.245669+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 3956736 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:07.245831+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 3948544 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988934 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:08.246030+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 3948544 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:09.246261+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 3948544 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:10.246407+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 3940352 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:11.246575+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 3940352 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:12.246753+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 3932160 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988934 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:13.246955+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 3932160 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:14.247103+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 3932160 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:15.247302+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 3923968 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:16.247508+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 3923968 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:17.247689+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 3915776 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988934 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:18.247863+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 3915776 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:19.248093+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 3907584 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:20.248264+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 3907584 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:21.248443+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81952768 unmapped: 3907584 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:22.248596+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 3899392 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988934 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:23.248754+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 3899392 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:24.248943+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 3891200 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:25.249080+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 3891200 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:26.249252+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 3891200 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b0963a4800 session 0x55b0988aed20
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:27.249427+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 3883008 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988934 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:28.249592+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 3883008 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:29.249831+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 3874816 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:30.249988+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 3874816 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:31.250155+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 3866624 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:32.250308+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 3866624 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 988934 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:33.250502+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 3858432 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:34.250745+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 3858432 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:35.250964+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 3858432 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:36.251161+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 3850240 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:37.251551+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 81.893539429s of 81.902740479s, submitted: 2
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 3850240 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989066 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:38.251732+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 3850240 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:39.251947+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 3842048 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:40.252097+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 3842048 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:41.252269+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 3833856 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:42.252500+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 3833856 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990578 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:43.252680+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 3825664 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:44.252836+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 3825664 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:45.252956+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 3825664 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:46.253125+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 3817472 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:47.253304+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 3817472 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989987 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:48.253473+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82051072 unmapped: 3809280 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:49.253734+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82051072 unmapped: 3809280 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:50.253877+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82051072 unmapped: 3809280 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:51.254039+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82059264 unmapped: 3801088 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:52.254179+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 3792896 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989987 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:53.254388+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.926835060s of 15.939207077s, submitted: 3
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 3784704 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:54.254541+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 3784704 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:55.254706+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 3776512 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:56.254825+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 3776512 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:57.254931+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82092032 unmapped: 3768320 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989855 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:58.255061+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82092032 unmapped: 3768320 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:55:59.255202+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 3760128 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:00.255384+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 3760128 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:01.255507+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 3760128 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:02.255614+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 3751936 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989855 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:03.255728+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 3751936 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:04.255844+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 3751936 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:05.256004+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 3743744 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:06.256167+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 3743744 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:07.256295+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 3735552 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989855 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:08.256381+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 3735552 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:09.256561+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 3735552 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:10.256726+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 3727360 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:11.256859+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 3727360 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:12.256990+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 3719168 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989855 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:13.257144+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 3719168 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:14.257383+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 3710976 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b096d95c00 session 0x55b098e263c0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099027400 session 0x55b096bfb0e0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:15.257501+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 3710976 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:16.257681+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 3702784 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:17.257872+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 3694592 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:18.258033+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989855 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 3694592 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:19.258281+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 3686400 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:20.258463+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 3686400 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:21.258591+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 3678208 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:22.258705+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 3678208 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:23.258890+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989855 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 3678208 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:24.259112+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 3670016 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:25.259287+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026400
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 32.095127106s of 32.099720001s, submitted: 1
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 3670016 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:26.259426+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 3670016 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:27.259574+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 3661824 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:28.259729+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989987 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc9f8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x2fdf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b097aa5c00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 3661824 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:29.260029+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 3661824 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:30.260204+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 3661824 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:31.260336+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b097aa4400
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 3661824 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:32.260449+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 3661824 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:33.260617+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991499 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 3661824 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:34.260755+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [1,1])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 3620864 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:35.260926+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.123404503s of 10.003307343s, submitted: 325
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82362368 unmapped: 3497984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:36.261091+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:37.261571+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:38.261853+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990908 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:39.262069+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:40.262302+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:41.262511+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:42.262631+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:43.262743+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990776 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026400 session 0x55b0986803c0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b097aa4400 session 0x55b0987bd680
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:44.262900+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:45.263029+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:46.263179+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:47.263308+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:48.263495+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990776 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:49.263694+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:50.263837+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:51.264001+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:52.264187+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:53.264345+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990776 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:54.264528+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0963a4800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.957939148s of 19.074586868s, submitted: 41
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:55.264685+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:56.264876+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:57.265010+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:58.265188+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990908 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:56:59.265373+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:00.265609+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:01.265797+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:02.265939+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:03.266092+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990908 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:04.266289+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:05.266462+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:06.266672+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:07.266817+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:08.266984+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990317 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:09.267807+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:10.267994+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.904156685s of 15.913485527s, submitted: 2
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:11.268172+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:12.268395+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:13.268571+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:14.268862+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:15.269227+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:16.269446+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:17.269828+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:18.270031+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:19.270300+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:20.270441+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:21.270611+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:22.270759+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:23.270961+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:24.271398+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:25.271709+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:26.271901+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:27.272075+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:28.272288+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:29.272573+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:30.272775+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:31.272909+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:32.273097+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:33.273255+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:34.273408+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:35.273596+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:36.273789+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:37.273940+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:38.274101+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:39.274301+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:40.274490+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:41.274629+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:42.274790+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:43.274969+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:44.275132+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:45.275271+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:46.275375+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:47.275497+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:48.275646+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:49.275848+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:50.275993+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:51.276140+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:52.276270+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:53.276396+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:54.276510+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:55.276636+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:56.276755+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:57.276884+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:58.277035+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:57:59.277201+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:00.277314+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:01.277474+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:02.277597+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:03.277740+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:04.277886+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:05.278053+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:06.278204+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:07.278380+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:08.278543+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:09.278736+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:10.278876+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:11.279052+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:12.279261+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:13.279403+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:14.279553+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:15.279670+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:16.279816+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:17.279941+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:18.280131+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:19.280383+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:20.280524+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:21.280641+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:22.280798+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:23.280925+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:24.281094+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:25.281264+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:26.281386+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:27.281536+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:28.281703+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:29.281877+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:30.281988+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:31.282127+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:32.282301+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:33.282607+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:34.282802+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:35.283053+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:36.283204+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:37.283384+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:38.283588+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:39.283739+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:40.283883+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:41.283998+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:42.284168+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:43.284389+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:44.284936+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:45.285117+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:46.285282+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:47.285568+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:48.285695+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:49.286625+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:50.286740+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:51.286945+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:52.287416+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:53.287560+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:54.287691+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:55.287827+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:56.287947+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:57.288093+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:58.288243+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:59.288451+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:00.288582+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:01.288740+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:02.289043+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:03.289252+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:04.289459+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:05.289654+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:06.289775+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:07.289949+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:08.290102+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:09.290274+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:10.290499+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:11.290651+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:12.290820+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:13.290997+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:14.291156+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:15.291363+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:16.291500+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:17.291616+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:18.291752+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:19.291945+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:20.292082+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:21.292223+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:22.292393+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:23.292500+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:24.292616+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:25.292801+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:26.292975+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:27.293116+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:28.293235+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:29.293402+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:30.293542+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:31.293670+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:32.293837+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:33.293985+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:34.294119+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:35.294414+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:36.294560+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:37.294831+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:38.295000+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:39.295209+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:40.295491+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:41.295653+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:42.295796+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:43.295995+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:44.296136+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:45.296364+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b0963a4800 session 0x55b098fe1a40
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:46.296542+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 3465216 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:47.296699+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 3465216 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:48.296859+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 3465216 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:49.297291+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:50.297438+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:51.297598+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:52.297742+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:53.297881+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:54.298046+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:55.298152+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:56.298261+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b096d95c00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 166.030792236s of 166.034805298s, submitted: 1
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b097aa5c00 session 0x55b0987bf4a0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026000 session 0x55b096d5eb40
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:57.298392+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:58.298559+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990317 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:59.298799+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:00.298952+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:01.299096+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:02.299274+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:03.299431+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991829 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:04.299553+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:05.299715+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:06.299869+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:07.300717+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027400
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.242959023s of 11.250681877s, submitted: 2
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:08.300836+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991961 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:09.300980+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:10.301123+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:11.301368+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:12.301544+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:13.301722+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b096657800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993473 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:14.301899+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:15.302061+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:16.302200+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:17.302348+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:18.302536+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993341 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:19.302922+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.104929924s of 12.169629097s, submitted: 3
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:20.303062+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:21.303694+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:22.303849+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:23.304371+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992618 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:24.304500+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:25.304643+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:26.305013+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:27.305755+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:28.305864+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b096657800 session 0x55b099008780
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099027400 session 0x55b09900a1e0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992618 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:29.306208+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:30.306385+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:31.307012+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:32.307157+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:33.307310+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992618 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:34.307485+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026800 session 0x55b096c4b0e0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b096dca000 session 0x55b097a14f00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:35.308308+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:36.308558+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:37.308703+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:38.308873+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992618 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:39.309076+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0963a4800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.395177841s of 19.406061172s, submitted: 3
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:40.309286+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:41.309584+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:42.309806+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:43.309965+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:44.310124+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992750 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:45.310248+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b097aa5c00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:46.310398+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:47.310505+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:48.310641+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:49.310837+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992882 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:50.311005+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:51.311154+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:52.311384+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.400293350s of 13.415930748s, submitted: 2
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:53.311540+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:54.311681+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992750 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:55.311821+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:56.311986+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:57.312411+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:58.312566+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b0963a4800 session 0x55b09900ab40
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:59.312765+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992750 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:00.312913+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:01.313086+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:02.313404+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.019653320s of 10.023086548s, submitted: 1
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:03.313557+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:04.313759+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992618 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:05.313878+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:06.314009+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:07.314163+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:08.314304+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:09.314543+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992618 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:10.314765+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:11.314957+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:12.315099+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:13.315299+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:14.315548+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992750 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:15.315682+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026400
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.912096024s of 12.919400215s, submitted: 2
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:16.315817+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:17.315956+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:18.316076+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:19.316294+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994262 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:20.316434+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:21.316549+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:22.316724+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:23.316851+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:24.317195+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994130 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 3416064 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:25.317342+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 3416064 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:26.317583+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 3416064 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:27.318469+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:28.318653+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:29.319435+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994130 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b096dcbc00 session 0x55b09586e3c0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0963a4800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:30.319766+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:31.319953+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:32.320223+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:33.320430+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:34.320578+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994130 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:35.320718+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:36.320876+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:37.321015+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:38.321190+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:39.321432+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994130 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:40.321669+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026400 session 0x55b09900b680
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026000 session 0x55b09900b0e0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:41.321817+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:42.322150+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:43.322371+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:44.322566+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994130 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:45.322700+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:46.322833+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:47.323021+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82477056 unmapped: 3383296 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:48.323143+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82477056 unmapped: 3383296 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:49.323301+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994130 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82485248 unmapped: 3375104 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:50.323462+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82485248 unmapped: 3375104 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:51.323636+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b096dca000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.915779114s of 35.922908783s, submitted: 2
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 3366912 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:52.323852+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 3366912 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:53.324005+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 3366912 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:54.324148+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994262 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 3366912 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:55.324533+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 3366912 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:56.324678+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 3366912 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:57.324819+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3358720 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:58.325047+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3358720 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:59.325272+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995774 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3358720 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:00.325434+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3358720 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:01.325578+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3358720 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:02.325749+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3358720 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:03.325886+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.083123207s of 12.089940071s, submitted: 2
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3358720 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:04.326023+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995183 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3358720 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:05.326155+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3358720 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:06.326373+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3358720 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:07.326538+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82518016 unmapped: 3342336 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:08.326661+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82518016 unmapped: 3342336 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:09.327140+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995051 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82518016 unmapped: 3342336 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:10.327411+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 3334144 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:11.327581+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 3334144 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:12.327801+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 3334144 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:13.327941+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 3334144 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:14.328123+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995051 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 3334144 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:15.328247+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 3334144 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:16.328434+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3325952 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:17.328594+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3325952 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:18.328795+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3325952 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:19.328955+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995051 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3325952 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:20.329167+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3325952 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:21.329397+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3325952 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:22.329526+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3325952 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:23.329660+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3325952 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:24.329847+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995051 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3325952 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:25.329987+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3325952 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:26.330189+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3325952 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:27.330373+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:28.330599+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:29.330814+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995051 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:30.330963+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:31.331179+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:32.331347+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:33.331504+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:34.331693+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b097aa5c00 session 0x55b0990090e0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995051 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:35.331847+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:36.332002+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:37.332221+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:38.332387+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:39.332702+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995051 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:40.332858+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:41.332996+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:42.333274+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:43.333432+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:44.333601+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995051 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:45.333734+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027400
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 41.845153809s of 41.853366852s, submitted: 2
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:46.334056+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82558976 unmapped: 3301376 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:47.334229+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82558976 unmapped: 3301376 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b098269000 session 0x55b097a15e00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099068000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:48.334380+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:49.334639+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995183 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:50.334818+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026800 session 0x55b0987bf4a0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b096dca000 session 0x55b098e19a40
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:51.334963+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b096dca000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:52.335144+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:53.335314+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:54.335535+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996695 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:55.335719+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:56.335904+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:57.336083+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:58.336250+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:59.336421+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996695 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:00.336613+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.253569603s of 15.263068199s, submitted: 2
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:01.336728+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b097aa5c00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:02.336821+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:03.337018+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:04.337186+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998207 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:05.337351+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:06.337533+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:07.337640+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:08.337783+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 3260416 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:09.337926+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998207 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:10.338106+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:11.338299+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:12.338431+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:13.338565+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:14.338726+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997616 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:15.338818+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:16.338976+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.775625229s of 15.790586472s, submitted: 4
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:17.339143+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:18.339295+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:19.339495+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997484 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:20.339667+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:21.339842+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:22.340026+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:23.340214+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:24.340352+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997484 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:25.340526+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099027000 session 0x55b0988cda40
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b096d95c00 session 0x55b0987be5a0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:26.340666+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:27.340847+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 3235840 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:28.340941+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 3235840 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:29.341131+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 3235840 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997484 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:30.341297+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 3235840 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:31.341387+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 3235840 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:32.341532+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 3235840 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:33.341677+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 3235840 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:34.341847+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 3235840 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997484 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:35.342411+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3227648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:36.342826+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3227648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.357236862s of 20.361238480s, submitted: 1
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:37.342960+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3227648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:38.343129+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3227648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:39.344580+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3227648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997616 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026400
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:40.344741+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3227648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:41.344932+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3227648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:42.345097+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3227648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:43.345256+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3227648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:44.345581+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3227648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000640 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:45.345705+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3227648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:46.345846+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3227648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:47.346463+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 3211264 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:48.346633+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 3211264 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:49.346829+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 3211264 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000049 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:50.347030+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 3211264 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:51.347200+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.517070770s of 14.537599564s, submitted: 4
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 3203072 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:52.347372+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 3203072 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:53.347530+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 3203072 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:54.347679+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 3203072 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999917 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:55.347821+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 3203072 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:56.347958+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3194880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:57.348159+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3194880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:58.348344+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026400 session 0x55b09905d2c0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b097aa5c00 session 0x55b096bfb0e0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3194880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b096dca000 session 0x55b098e285a0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099027400 session 0x55b098e29860
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:59.348524+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3194880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999917 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:00.348676+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3194880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:01.348825+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3194880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:02.348987+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3194880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:03.349118+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3194880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:04.349259+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3194880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999917 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:05.349464+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3194880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:06.349627+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3194880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:07.350909+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82681856 unmapped: 3178496 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:08.351096+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82681856 unmapped: 3178496 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:09.351314+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b096d95c00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.958301544s of 17.961801529s, submitted: 1
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b097aa5c00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82698240 unmapped: 3162112 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000181 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:10.351459+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82698240 unmapped: 3162112 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:11.351593+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82698240 unmapped: 3162112 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:12.351738+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82698240 unmapped: 3162112 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:13.351869+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82698240 unmapped: 3162112 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:14.352009+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82698240 unmapped: 3162112 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000181 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:15.352137+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026400
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 3153920 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread fragmentation_score=0.000030 took=0.000038s
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:16.352270+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 3153920 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:17.352432+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 3153920 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:18.353030+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 3153920 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:19.353357+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 3145728 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002614 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:20.353760+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 3145728 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:21.354045+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.060987473s of 12.085634232s, submitted: 5
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 3145728 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:22.354176+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 3145728 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:23.354311+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 3145728 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:24.354546+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 3145728 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 9078 writes, 35K keys, 9078 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 9078 writes, 2064 syncs, 4.40 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 776 writes, 1221 keys, 776 commit groups, 1.0 writes per commit group, ingest: 0.40 MB, 0.00 MB/s
                                           Interval WAL: 776 writes, 366 syncs, 2.12 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac69b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac69b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac69b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002023 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:25.354703+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 3145728 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:26.354860+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 3137536 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:27.354993+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3121152 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:28.355139+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3121152 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:29.355382+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3121152 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:30.355519+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3121152 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:31.355690+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3121152 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:32.355827+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3121152 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:33.356021+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3121152 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:34.356216+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3121152 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:35.356394+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3121152 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:36.356556+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3121152 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:37.356719+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3121152 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:38.356920+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 3104768 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:39.357150+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 3104768 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:40.357312+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 3104768 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:41.357991+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 3104768 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:42.358141+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 3104768 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:43.358282+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 3096576 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:44.358429+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 3096576 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:45.358902+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 3096576 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:46.359011+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 3096576 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:47.359145+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:48.359270+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:49.359385+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:50.359557+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:51.359755+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:52.359879+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:53.360063+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:54.360182+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:55.360399+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:56.360545+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:57.360760+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:58.360993+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:59.361313+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:00.361557+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:01.361946+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:02.362265+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:03.362457+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:04.362581+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:05.362755+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:06.362883+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:07.363010+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82796544 unmapped: 3063808 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:08.363123+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82796544 unmapped: 3063808 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:09.363279+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82796544 unmapped: 3063808 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:10.363434+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82796544 unmapped: 3063808 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:11.363546+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 3055616 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:12.363729+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 3055616 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:13.363908+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 3055616 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:14.364230+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:15.364418+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:16.364635+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:17.364824+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:18.365014+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:19.365245+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:20.365544+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:21.365726+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:22.365959+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:23.366138+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:24.366434+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:25.366634+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:26.366848+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:27.367055+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 3031040 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:28.367307+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 3031040 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:29.367677+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 3031040 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:30.367929+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:31.368148+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:32.368464+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:33.368644+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:34.368811+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:35.369030+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:36.369221+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:37.369486+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:38.369726+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:39.370006+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:40.370266+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:41.370517+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:42.370739+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:43.370939+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:44.371209+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:45.371491+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:46.371704+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:47.371902+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3006464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:48.372141+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099027000 session 0x55b0988dd680
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b097aa5c00 session 0x55b097a69e00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3006464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:49.372405+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3006464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:50.372605+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3006464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:51.372825+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3006464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:52.373028+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3006464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:53.373235+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3006464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:54.373505+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3006464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:55.373763+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3006464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:56.373997+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3006464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:57.374198+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3006464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:58.374458+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3006464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:59.374761+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099069c00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 97.974418640s of 97.984451294s, submitted: 3
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:00.374965+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001891 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:01.375136+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:02.375466+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:03.375762+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:04.376081+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:05.376229+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099069800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006427 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:06.376414+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:07.376607+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:08.376835+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:09.377080+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:10.377381+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005836 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:11.377625+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.054930687s of 12.073850632s, submitted: 5
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:12.377822+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:13.378002+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:14.378234+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:15.378475+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005113 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:16.378715+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026400 session 0x55b0994fde00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b096d95c00 session 0x55b09900ab40
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:17.378927+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:18.379577+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:19.380130+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:20.380558+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005113 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:21.380730+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:22.381177+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 2990080 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:23.381488+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 2990080 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:24.381651+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 2990080 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:25.381872+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005113 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 2990080 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:26.382048+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 2990080 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:27.382238+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b096d95c00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.182666779s of 16.189365387s, submitted: 2
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 2990080 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:28.382526+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 2990080 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:29.382825+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:30.383045+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82878464 unmapped: 2981888 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005245 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:31.383594+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82878464 unmapped: 2981888 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:32.383870+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82878464 unmapped: 2981888 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099069800 session 0x55b098fe4000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099069c00 session 0x55b098f794a0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:33.384286+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82878464 unmapped: 2981888 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b097aa5c00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:34.384670+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82878464 unmapped: 2981888 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:35.384802+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83189760 unmapped: 2670592 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005245 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:36.385008+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:37.385234+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:38.385420+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:39.385622+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.918289185s of 12.100981712s, submitted: 367
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:40.385817+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004063 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:41.386008+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:42.386182+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:43.386430+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026400
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:44.386616+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:45.386823+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004063 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:46.387018+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:47.387179+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026000 session 0x55b098fe1680
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026800 session 0x55b09905d0e0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:48.387423+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:49.387671+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:50.387895+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004063 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:51.388138+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:52.388403+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:53.388568+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:54.388810+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:55.389027+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004063 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:56.389246+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:57.389439+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:58.389628+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026400 session 0x55b09905c780
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026400
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.894775391s of 18.905117035s, submitted: 3
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:59.389834+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:00.390695+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004063 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:01.390913+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:02.391153+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:03.391393+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:04.391588+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:05.391779+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005575 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:06.392090+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:07.392387+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:08.392685+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:09.392991+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.683311462s of 10.696245193s, submitted: 3
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:10.393283+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005707 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:11.393584+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:12.393822+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:13.394092+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:14.394438+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:15.394745+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099069800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008599 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:16.394963+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:17.395171+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:18.395304+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:19.395506+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:20.395697+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008599 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:21.395863+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.102847099s of 12.118186951s, submitted: 4
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:22.395999+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:23.396189+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:24.396383+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:25.396538+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:26.396679+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:27.396802+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:28.397007+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:29.397239+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:30.397418+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:31.397591+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:32.397750+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:33.397893+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:34.398087+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:35.398236+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:36.398395+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:37.398540+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:38.398699+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:39.398945+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:40.399119+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:41.399348+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:42.399483+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:43.399682+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:44.399912+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:45.400063+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:46.400289+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:47.400453+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:48.400641+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:49.400919+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:50.401153+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:51.401465+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:52.401717+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:53.401888+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:54.402065+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:55.402242+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:56.402407+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:57.402580+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:58.402725+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:59.402895+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:00.403071+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:01.403252+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:02.403397+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:03.403594+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:04.403833+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:05.404043+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:06.404201+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:07.404425+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:08.404665+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:09.404926+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:10.405128+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:11.405469+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:12.405639+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:13.405990+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:14.406183+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:15.406445+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:16.406637+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:17.406822+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:18.407053+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:19.407277+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:20.407440+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:21.407587+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:22.407762+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099069800 session 0x55b0986805a0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026800 session 0x55b09840d2c0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:23.408010+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:24.408279+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:25.408531+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:26.408785+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:27.409002+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:28.409161+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:29.409379+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:30.409528+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:31.409714+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:32.409858+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:33.410038+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099069c00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 71.958114624s of 71.965682983s, submitted: 2
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:34.410224+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:35.410412+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008008 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:36.410606+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:37.410827+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:38.411013+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:39.411269+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:40.411489+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008008 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:41.411701+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:42.411925+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:43.412129+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:44.412296+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:45.412472+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.132945061s of 12.140886307s, submitted: 2
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006826 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:46.412663+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:47.412824+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:48.413004+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:49.413231+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:50.413364+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b097aa5c00 session 0x55b098f8a3c0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b096d95c00 session 0x55b0988dd680
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006694 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:51.413483+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:52.413696+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:53.413946+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:54.414207+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:55.414420+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006694 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:56.414581+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:57.414724+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:58.414833+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:59.415038+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:00.415182+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006694 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:01.415359+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027400
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.335838318s of 16.343191147s, submitted: 2
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:02.415528+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:03.415729+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:04.415902+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:05.416097+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006826 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:06.416310+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:07.416528+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099069400
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:08.416711+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2465792 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:09.416883+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 2514944 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:10.417046+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 2514944 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:11.417191+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008338 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 2514944 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:12.417439+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 2514944 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:13.417595+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 2514944 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:14.417745+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 2514944 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:15.417882+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 2514944 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:16.418029+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008338 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 2514944 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:17.418303+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.721952438s of 15.729380608s, submitted: 2
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2498560 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:18.418575+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2498560 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:19.418767+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2498560 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:20.418904+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099069000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2498560 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099069400 session 0x55b098fe0f00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099027400 session 0x55b098f9fc20
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _renew_subs
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:21.419061+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011972 data_alloc: 218103808 data_used: 282624
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2498560 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 144 handle_osd_map epochs [144,145], i have 144, src has [1,145]
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:22.419261+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 18112512 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 145 ms_handle_reset con 0x55b099069000 session 0x55b0988aeb40
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:23.419403+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099069400
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _renew_subs
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 18096128 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 146 handle_osd_map epochs [146,147], i have 146, src has [1,147]
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:24.419576+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 86663168 unmapped: 15982592 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:25.419768+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 148 ms_handle_reset con 0x55b099069400 session 0x55b098f8be00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:26.419977+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080007 data_alloc: 218103808 data_used: 290816
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:27.420144+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd7000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:28.420348+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:29.420537+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:30.420753+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:31.420916+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080007 data_alloc: 218103808 data_used: 290816
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b096d95c00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.096752167s of 14.256991386s, submitted: 46
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd7000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:32.421101+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:33.421291+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:34.421469+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:35.421667+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:36.421834+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080643 data_alloc: 218103808 data_used: 290816
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:37.422051+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b097aa5c00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:38.422188+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:39.422432+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:40.422636+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:41.422915+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080643 data_alloc: 218103808 data_used: 290816
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:42.423074+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:43.423389+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.083035469s of 12.092510223s, submitted: 2
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:44.423586+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:45.423809+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:46.424004+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080052 data_alloc: 218103808 data_used: 290816
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:47.424183+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:48.424389+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:49.424644+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:50.424853+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:51.425019+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079920 data_alloc: 218103808 data_used: 290816
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:52.425197+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:53.425429+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:54.425618+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:55.425813+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:56.425980+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079920 data_alloc: 218103808 data_used: 290816
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:57.426150+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:58.426296+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:59.426503+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:00.426661+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:01.426871+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079920 data_alloc: 218103808 data_used: 290816
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:02.427127+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 148 ms_handle_reset con 0x55b099026800 session 0x55b09900bc20
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099069800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 148 ms_handle_reset con 0x55b099069800 session 0x55b0990083c0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:03.427277+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 148 ms_handle_reset con 0x55b099026800 session 0x55b098e26000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 148 ms_handle_reset con 0x55b099026000 session 0x55b099433680
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 148 ms_handle_reset con 0x55b099026400 session 0x55b09722fa40
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:04.427444+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027400
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 148 ms_handle_reset con 0x55b099027400 session 0x55b098856b40
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 93822976 unmapped: 8822784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:05.427570+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099069000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 148 ms_handle_reset con 0x55b099069000 session 0x55b096c4a3c0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _renew_subs
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.802989960s of 21.816146851s, submitted: 2
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 93822976 unmapped: 8822784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:06.427759+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104862 data_alloc: 218103808 data_used: 7106560
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _renew_subs
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 150 ms_handle_reset con 0x55b099026000 session 0x55b098fe0960
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 94945280 unmapped: 11378688 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026400
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 150 ms_handle_reset con 0x55b099026400 session 0x55b0982841e0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:07.427932+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 150 ms_handle_reset con 0x55b099026800 session 0x55b098e19860
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027400
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 150 ms_handle_reset con 0x55b099027400 session 0x55b0988cd860
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099069000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 150 ms_handle_reset con 0x55b099069000 session 0x55b098f8b2c0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fbdd3000/0x0/0x4ffc00000, data 0x974379/0xa38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 95068160 unmapped: 11255808 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:08.428130+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb81c000/0x0/0x4ffc00000, data 0xf294c4/0xfee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 150 handle_osd_map epochs [150,151], i have 150, src has [1,151]
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 10387456 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:09.428429+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 10387456 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:10.428598+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 10387456 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:11.428803+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155058 data_alloc: 218103808 data_used: 7106560
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 10387456 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:12.428986+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb81a000/0x0/0x4ffc00000, data 0xf2b496/0xff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 10387456 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:13.429165+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 10387456 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:14.429439+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026400
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026400 session 0x55b096e1c960
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 10387456 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb81a000/0x0/0x4ffc00000, data 0xf2b496/0xff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:15.429566+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027400
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 10387456 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:16.429699+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1156283 data_alloc: 218103808 data_used: 7114752
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 96575488 unmapped: 9748480 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:17.429849+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 97320960 unmapped: 9003008 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:18.429989+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 97320960 unmapped: 9003008 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:19.430135+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 97320960 unmapped: 9003008 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:20.430244+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb81b000/0x0/0x4ffc00000, data 0xf2b496/0xff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 97320960 unmapped: 9003008 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:21.430418+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188963 data_alloc: 218103808 data_used: 7876608
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 97320960 unmapped: 9003008 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:22.430606+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 97320960 unmapped: 9003008 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:23.430776+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.784969330s of 17.929061890s, submitted: 52
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 97320960 unmapped: 9003008 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:24.430903+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 97320960 unmapped: 9003008 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:25.431042+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb81b000/0x0/0x4ffc00000, data 0xf2b496/0xff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 97320960 unmapped: 9003008 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:26.431180+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb81b000/0x0/0x4ffc00000, data 0xf2b496/0xff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188372 data_alloc: 218103808 data_used: 7876608
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 97320960 unmapped: 9003008 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:27.431356+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 4276224 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:28.431526+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102105088 unmapped: 4218880 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:29.431736+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x1176496/0x123c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102121472 unmapped: 4202496 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:30.431886+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102121472 unmapped: 4202496 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:31.432074+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x1176496/0x123c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217534 data_alloc: 218103808 data_used: 8945664
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:32.432221+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:33.432401+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x1176496/0x123c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:34.432613+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:35.432772+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:36.432958+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217534 data_alloc: 218103808 data_used: 8945664
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:37.433176+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:38.433424+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x1176496/0x123c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:39.433632+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:40.433805+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:41.434246+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217686 data_alloc: 218103808 data_used: 8949760
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:42.434499+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:43.434686+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x1176496/0x123c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:44.434902+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:45.435088+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:46.435398+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217838 data_alloc: 218103808 data_used: 8953856
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x1176496/0x123c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x1176496/0x123c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:47.435586+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102187008 unmapped: 4136960 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:48.435822+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102187008 unmapped: 4136960 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x1176496/0x123c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:49.436047+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102187008 unmapped: 4136960 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099069400
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099069400 session 0x55b0991de960
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:50.436213+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 2875392 heap: 107372544 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099068c00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.675640106s of 26.806079865s, submitted: 44
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x1176496/0x123c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099068c00 session 0x55b098fe1a40
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099068800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099068800 session 0x55b0988ae5a0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099068400
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099068400 session 0x55b09722e960
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026400
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026400 session 0x55b096bfb860
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099068800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099068800 session 0x55b099432960
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x1176496/0x123c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:51.436437+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105299968 unmapped: 17948672 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293835 data_alloc: 218103808 data_used: 8970240
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:52.436588+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105299968 unmapped: 17948672 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b098fe0b40
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099069c00 session 0x55b098e28960
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:53.436835+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105299968 unmapped: 17948672 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f97fa000/0x0/0x4ffc00000, data 0x1dac496/0x1e72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:54.437040+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105299968 unmapped: 17948672 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:55.437219+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105299968 unmapped: 17948672 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:56.437401+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105299968 unmapped: 17948672 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293835 data_alloc: 218103808 data_used: 8970240
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099068c00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099068c00 session 0x55b0988af680
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:57.437613+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105299968 unmapped: 17948672 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:58.437806+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105299968 unmapped: 17948672 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f97fa000/0x0/0x4ffc00000, data 0x1dac496/0x1e72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026400
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026400 session 0x55b099433860
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:59.438080+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105299968 unmapped: 17948672 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b0988cc000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099068800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099068800 session 0x55b0988dcb40
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:00.438281+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105177088 unmapped: 18071552 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f97fa000/0x0/0x4ffc00000, data 0x1dac496/0x1e72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099069c00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.327645302s of 10.456887245s, submitted: 32
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099069400
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:01.438473+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105177088 unmapped: 18071552 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1295076 data_alloc: 218103808 data_used: 8974336
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:02.438652+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105177088 unmapped: 18071552 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:03.438834+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105177088 unmapped: 18071552 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099082400
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f97fa000/0x0/0x4ffc00000, data 0x1dac496/0x1e72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:04.439000+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108740608 unmapped: 14508032 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:05.439903+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 109838336 unmapped: 13410304 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f97fa000/0x0/0x4ffc00000, data 0x1dac496/0x1e72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:06.440641+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 13393920 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361328 data_alloc: 234881024 data_used: 16445440
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:07.441090+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 13393920 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:08.441610+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 13393920 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:09.442185+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 13393920 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099107400
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:10.442643+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 13393920 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f97fa000/0x0/0x4ffc00000, data 0x1dac496/0x1e72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:11.443439+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 13393920 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1364016 data_alloc: 234881024 data_used: 16445440
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:12.444054+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 13393920 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:13.444799+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 109838336 unmapped: 13410304 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.515455246s of 12.531072617s, submitted: 5
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:14.444984+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 8232960 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f8bfa000/0x0/0x4ffc00000, data 0x29a6496/0x2a6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:15.445259+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115982336 unmapped: 7266304 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:16.445700+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115982336 unmapped: 7266304 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1475602 data_alloc: 234881024 data_used: 17408000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:17.445921+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115990528 unmapped: 7258112 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f8bb4000/0x0/0x4ffc00000, data 0x29e3496/0x2aa9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:18.446112+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 116023296 unmapped: 7225344 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:19.446367+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 116023296 unmapped: 7225344 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:20.446547+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 116023296 unmapped: 7225344 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:21.446718+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 8200192 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1466414 data_alloc: 234881024 data_used: 17408000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f8bc0000/0x0/0x4ffc00000, data 0x29e6496/0x2aac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:22.446886+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 8200192 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:23.447091+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115056640 unmapped: 8192000 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.086705208s of 10.319118500s, submitted: 125
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099069c00 session 0x55b0988afa40
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099069400 session 0x55b09638f0e0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026400
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:24.447249+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f8bc0000/0x0/0x4ffc00000, data 0x29e6496/0x2aac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026400 session 0x55b098f9f680
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 107134976 unmapped: 16113664 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:25.447421+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 107134976 unmapped: 16113664 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:26.447587+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 107134976 unmapped: 16113664 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207454 data_alloc: 218103808 data_used: 5505024
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:27.447817+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 107134976 unmapped: 16113664 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:28.448016+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa430000/0x0/0x4ffc00000, data 0x1176496/0x123c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 107134976 unmapped: 16113664 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:29.448192+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 107134976 unmapped: 16113664 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026800 session 0x55b0990081e0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027400 session 0x55b09905c5a0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa430000/0x0/0x4ffc00000, data 0x1176496/0x123c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:30.448312+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105947136 unmapped: 17301504 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b09840d4a0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:31.448511+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130516 data_alloc: 218103808 data_used: 3641344
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:32.448636+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:33.448853+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:34.449080+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:35.449268+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:36.449482+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130516 data_alloc: 218103808 data_used: 3641344
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:37.449692+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:38.449877+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:39.450104+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:40.450248+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:41.450407+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130516 data_alloc: 218103808 data_used: 3641344
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:42.450577+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:43.450751+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:44.450943+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:45.451173+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:46.451397+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130516 data_alloc: 218103808 data_used: 3641344
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:47.451609+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:48.451768+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:49.451984+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:50.452150+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:51.452350+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130516 data_alloc: 218103808 data_used: 3641344
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:52.452519+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:53.452694+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:54.452854+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:55.452996+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099068800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099068800 session 0x55b097a68d20
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026400
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026400 session 0x55b0987bfa40
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026800 session 0x55b0993723c0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b09936c960
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027400
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 31.987216949s of 32.222537994s, submitted: 83
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:56.453167+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027400 session 0x55b096d5ed20
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099069c00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099069c00 session 0x55b098681860
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026400
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026400 session 0x55b098e292c0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026800 session 0x55b0970df2c0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b0970ded20
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 19275776 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144104 data_alloc: 218103808 data_used: 3641344
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:57.453386+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 19275776 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:58.453533+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 19275776 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:59.453769+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fab87000/0x0/0x4ffc00000, data 0xa1e4a6/0xae5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 19275776 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fab87000/0x0/0x4ffc00000, data 0xa1e4a6/0xae5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026000 session 0x55b0987bd2c0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:00.453959+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 19275776 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:01.454179+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 19275776 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144104 data_alloc: 218103808 data_used: 3641344
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:02.454404+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 19275776 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:03.454560+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 19275776 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027400
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027400 session 0x55b09905d860
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:04.454701+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026000 session 0x55b09723cb40
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 19275776 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fab87000/0x0/0x4ffc00000, data 0xa1e4a6/0xae5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026400
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026400 session 0x55b0987be5a0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026800 session 0x55b098f9e1e0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fab87000/0x0/0x4ffc00000, data 0xa1e4a6/0xae5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:05.454840+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104292352 unmapped: 18956288 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:06.454960+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104300544 unmapped: 18948096 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153680 data_alloc: 218103808 data_used: 4112384
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:07.455082+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fab62000/0x0/0x4ffc00000, data 0xa424c9/0xb0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104308736 unmapped: 18939904 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:08.455251+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fab62000/0x0/0x4ffc00000, data 0xa424c9/0xb0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104308736 unmapped: 18939904 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:09.456245+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104308736 unmapped: 18939904 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:10.456737+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104308736 unmapped: 18939904 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6400
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.687290192s of 14.744665146s, submitted: 17
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:11.456927+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fab62000/0x0/0x4ffc00000, data 0xa424c9/0xb0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104316928 unmapped: 18931712 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154724 data_alloc: 218103808 data_used: 4239360
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:12.457651+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104316928 unmapped: 18931712 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:13.458113+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104316928 unmapped: 18931712 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fab62000/0x0/0x4ffc00000, data 0xa424c9/0xb0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:14.458506+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104316928 unmapped: 18931712 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:15.458710+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104316928 unmapped: 18931712 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:16.459069+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fab62000/0x0/0x4ffc00000, data 0xa424c9/0xb0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104316928 unmapped: 18931712 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1159212 data_alloc: 218103808 data_used: 4243456
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:17.459420+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 107143168 unmapped: 16105472 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:18.459603+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 107167744 unmapped: 16080896 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:19.459824+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108568576 unmapped: 14680064 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa4ce000/0x0/0x4ffc00000, data 0x10c04c9/0x1188000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:20.460075+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106143744 unmapped: 17104896 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:21.460371+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106143744 unmapped: 17104896 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213196 data_alloc: 218103808 data_used: 4591616
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:22.460566+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106143744 unmapped: 17104896 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:23.460720+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106143744 unmapped: 17104896 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:24.460875+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.378688812s of 13.587653160s, submitted: 76
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106143744 unmapped: 17104896 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:25.461046+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa4e4000/0x0/0x4ffc00000, data 0x10c04c9/0x1188000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106143744 unmapped: 17104896 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:26.461225+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106151936 unmapped: 17096704 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213232 data_alloc: 218103808 data_used: 4591616
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa4e4000/0x0/0x4ffc00000, data 0x10c04c9/0x1188000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:27.461443+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106151936 unmapped: 17096704 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:28.461668+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106151936 unmapped: 17096704 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:29.461880+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:30.462062+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:31.462226+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa4e4000/0x0/0x4ffc00000, data 0x10c04c9/0x1188000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213232 data_alloc: 218103808 data_used: 4591616
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:32.462416+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:33.462597+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:34.462768+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:35.462973+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa4e4000/0x0/0x4ffc00000, data 0x10c04c9/0x1188000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b098680960
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6400 session 0x55b0994323c0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:36.463236+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213232 data_alloc: 218103808 data_used: 4591616
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:37.463450+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:38.463676+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:39.463912+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:40.464044+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:41.464243+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa4e4000/0x0/0x4ffc00000, data 0x10c04c9/0x1188000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213232 data_alloc: 218103808 data_used: 4591616
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:42.464434+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa4e4000/0x0/0x4ffc00000, data 0x10c04c9/0x1188000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106168320 unmapped: 17080320 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:43.464637+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106168320 unmapped: 17080320 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:44.464830+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106168320 unmapped: 17080320 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:45.465035+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b096d7fe00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.899662018s of 20.906446457s, submitted: 2
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6000 session 0x55b09723cd20
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa4e4000/0x0/0x4ffc00000, data 0x10c04c9/0x1188000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104374272 unmapped: 18874368 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b098857a40
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:46.465256+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1143667 data_alloc: 218103808 data_used: 3641344
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:47.465501+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81d000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:48.465722+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:49.465964+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:50.466111+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:51.466434+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1143667 data_alloc: 218103808 data_used: 3641344
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:52.466587+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026400
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:53.466809+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81d000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:54.466964+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:55.467275+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.475157738s of 10.608925819s, submitted: 41
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:56.467574+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1142193 data_alloc: 218103808 data_used: 3641344
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:57.467810+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:58.467974+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:59.468204+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:00.468424+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:01.468588+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:02.468823+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141470 data_alloc: 218103808 data_used: 3641344
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:03.468975+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:04.469174+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:05.469402+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:06.469618+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:07.469790+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141470 data_alloc: 218103808 data_used: 3641344
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:08.470017+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099107400 session 0x55b096c4a5a0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099082400 session 0x55b0988dcf00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:09.470243+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:10.470395+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:11.470531+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:12.470676+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141470 data_alloc: 218103808 data_used: 3641344
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099082400
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.499574661s of 16.514310837s, submitted: 4
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099082400 session 0x55b096c4af00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6000 session 0x55b096c4a3c0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b09936da40
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b09936cf00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099107400
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099107400 session 0x55b0972781e0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104521728 unmapped: 19783680 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:13.470936+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104521728 unmapped: 19783680 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:14.475921+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104521728 unmapped: 19783680 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:15.476259+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa47d000/0x0/0x4ffc00000, data 0xd19496/0xddf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104521728 unmapped: 19783680 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:16.477222+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104521728 unmapped: 19783680 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:17.477497+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173157 data_alloc: 218103808 data_used: 3641344
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104538112 unmapped: 19767296 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6000 session 0x55b09874a780
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:18.478051+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104538112 unmapped: 19767296 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:19.478291+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa47c000/0x0/0x4ffc00000, data 0xd194b9/0xde0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104554496 unmapped: 19750912 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:20.478438+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104636416 unmapped: 19668992 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa47c000/0x0/0x4ffc00000, data 0xd194b9/0xde0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:21.478616+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105037824 unmapped: 19267584 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:22.479802+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201011 data_alloc: 218103808 data_used: 7344128
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105037824 unmapped: 19267584 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:23.480308+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105037824 unmapped: 19267584 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:24.480469+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa47c000/0x0/0x4ffc00000, data 0xd194b9/0xde0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105037824 unmapped: 19267584 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:25.480597+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa47c000/0x0/0x4ffc00000, data 0xd194b9/0xde0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105046016 unmapped: 19259392 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:26.480729+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105046016 unmapped: 19259392 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:27.480899+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201011 data_alloc: 218103808 data_used: 7344128
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105046016 unmapped: 19259392 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:28.481446+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105046016 unmapped: 19259392 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:29.482175+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa47c000/0x0/0x4ffc00000, data 0xd194b9/0xde0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105046016 unmapped: 19259392 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:30.482570+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa47c000/0x0/0x4ffc00000, data 0xd194b9/0xde0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.522539139s of 18.680767059s, submitted: 28
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 19152896 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:31.482691+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 109240320 unmapped: 15065088 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:32.482826+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280927 data_alloc: 218103808 data_used: 7426048
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 16277504 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:33.483269+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99c6000/0x0/0x4ffc00000, data 0x17cf4b9/0x1896000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99c6000/0x0/0x4ffc00000, data 0x17cf4b9/0x1896000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 16277504 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:34.483550+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 16277504 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:35.483912+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 16277504 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:36.484206+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 16277504 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:37.484563+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99c6000/0x0/0x4ffc00000, data 0x17cf4b9/0x1896000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280795 data_alloc: 218103808 data_used: 7426048
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 16277504 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:38.484730+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b097aa5c00 session 0x55b098fe4000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b096d95c00 session 0x55b098f78f00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 16269312 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:39.485000+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 16269312 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:40.485204+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 16269312 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:41.485426+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99c6000/0x0/0x4ffc00000, data 0x17cf4b9/0x1896000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 16269312 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:42.485704+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280811 data_alloc: 218103808 data_used: 7426048
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 16261120 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:43.485913+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 16261120 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:44.486120+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 16261120 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:45.486378+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99c6000/0x0/0x4ffc00000, data 0x17cf4b9/0x1896000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99c6000/0x0/0x4ffc00000, data 0x17cf4b9/0x1896000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 16261120 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:46.486627+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 16261120 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:47.486769+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280811 data_alloc: 218103808 data_used: 7426048
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99c6000/0x0/0x4ffc00000, data 0x17cf4b9/0x1896000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c7000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b098e28000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c7400
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7400 session 0x55b09936de00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b096d95c00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b096d95c00 session 0x55b097278d20
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108060672 unmapped: 16244736 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:48.487052+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b097aa5c00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b097aa5c00 session 0x55b09936d4a0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 112525312 unmapped: 11780096 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:49.487311+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6000 session 0x55b099432d20
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c7000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.515605927s of 18.667829514s, submitted: 60
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b0991dfc20
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c7800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c7c00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7800 session 0x55b098e281e0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99c6000/0x0/0x4ffc00000, data 0x17cf4b9/0x1896000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b096d95c00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b096d95c00 session 0x55b097a68d20
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b097aa5c00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b097aa5c00 session 0x55b0994325a0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6000 session 0x55b096ddd4a0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 19636224 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:50.487508+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 19636224 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:51.487673+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f90d1000/0x0/0x4ffc00000, data 0x20c252b/0x218b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:52.487840+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 19636224 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361043 data_alloc: 234881024 data_used: 10899456
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c7000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b0982843c0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:53.488315+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 19636224 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:54.488674+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 19636224 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a094000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094000 session 0x55b096d112c0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b096d95c00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b096d95c00 session 0x55b098284960
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b097aa5c00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b097aa5c00 session 0x55b096d7f2c0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:55.488869+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113057792 unmapped: 19644416 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c7000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:56.489024+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113057792 unmapped: 19644416 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:57.489151+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 116203520 unmapped: 16498688 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420911 data_alloc: 234881024 data_used: 19783680
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f90d1000/0x0/0x4ffc00000, data 0x20c252b/0x218b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f90d1000/0x0/0x4ffc00000, data 0x20c252b/0x218b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:58.489557+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:59.489800+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f90d1000/0x0/0x4ffc00000, data 0x20c252b/0x218b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:00.489987+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:01.490227+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f90d1000/0x0/0x4ffc00000, data 0x20c252b/0x218b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:02.490429+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1423343 data_alloc: 234881024 data_used: 20115456
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:03.490653+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:04.490870+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026400 session 0x55b098fe4b40
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026000 session 0x55b099433c20
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:05.491056+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120291328 unmapped: 12410880 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f90d1000/0x0/0x4ffc00000, data 0x20c252b/0x218b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:06.491244+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120291328 unmapped: 12410880 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.933946609s of 17.117507935s, submitted: 45
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:07.491391+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 9232384 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1454755 data_alloc: 234881024 data_used: 20537344
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:08.491538+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123224064 unmapped: 9478144 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:09.491745+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123224064 unmapped: 9478144 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:10.491891+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f8d37000/0x0/0x4ffc00000, data 0x244652b/0x250f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123224064 unmapped: 9478144 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:11.492043+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123232256 unmapped: 9469952 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f8d37000/0x0/0x4ffc00000, data 0x244652b/0x250f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:12.492206+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123232256 unmapped: 9469952 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1464521 data_alloc: 234881024 data_used: 20365312
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:13.492432+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123265024 unmapped: 9437184 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:14.492615+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f8d37000/0x0/0x4ffc00000, data 0x244652b/0x250f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123265024 unmapped: 9437184 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6000 session 0x55b099432d20
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b097a154a0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b096d95c00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:15.492745+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b096d95c00 session 0x55b0988cd0e0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b097aa5c00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 17924096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:16.492898+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 17924096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:17.493084+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 17924096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296267 data_alloc: 234881024 data_used: 10899456
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f957d000/0x0/0x4ffc00000, data 0x17cf4b9/0x1896000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:18.493301+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 17924096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:19.496084+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 17924096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:20.496502+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 17924096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f957d000/0x0/0x4ffc00000, data 0x17cf4b9/0x1896000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:21.498072+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 17924096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:22.498315+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 17924096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296267 data_alloc: 234881024 data_used: 10899456
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.756012917s of 16.126758575s, submitted: 124
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b0987be1e0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b09586ef00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:23.498473+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 17924096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026000 session 0x55b09638e3c0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81d000/0x0/0x4ffc00000, data 0x9784b9/0xa3f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:24.498768+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 21069824 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81d000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 11K writes, 41K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 11K writes, 3034 syncs, 3.72 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2207 writes, 6322 keys, 2207 commit groups, 1.0 writes per commit group, ingest: 6.08 MB, 0.01 MB/s
                                           Interval WAL: 2207 writes, 970 syncs, 2.28 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:25.498911+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 21061632 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:26.499090+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81d000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 21061632 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:27.499225+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 21061632 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166769 data_alloc: 218103808 data_used: 7114752
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:28.499394+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 21061632 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:29.499658+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 21061632 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:30.499863+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 21061632 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:31.500034+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 21061632 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81d000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:32.500207+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 21053440 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166637 data_alloc: 218103808 data_used: 7114752
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81d000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:33.500386+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 21053440 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:34.500623+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 21053440 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:35.500837+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 21045248 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:36.501102+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 21045248 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:37.501291+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81d000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111665152 unmapped: 21037056 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166637 data_alloc: 218103808 data_used: 7114752
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:38.501534+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111665152 unmapped: 21037056 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:39.502124+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111665152 unmapped: 21037056 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:40.502515+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111665152 unmapped: 21037056 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81d000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:41.502720+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111665152 unmapped: 21037056 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:42.503131+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111665152 unmapped: 21037056 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166637 data_alloc: 218103808 data_used: 7114752
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:43.503452+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111665152 unmapped: 21037056 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:44.503697+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81d000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111665152 unmapped: 21037056 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:45.503871+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 21028864 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:46.504052+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 21028864 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:47.504271+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 21028864 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166637 data_alloc: 218103808 data_used: 7114752
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81d000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:48.504478+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 21028864 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:49.504749+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 21028864 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b096d95c00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b096d95c00 session 0x55b096dddc20
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b097a15860
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c7000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b099008f00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b099009a40
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026400
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.991001129s of 27.170951843s, submitted: 56
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:50.504935+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026400 session 0x55b097a15e00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b096d95c00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b096d95c00 session 0x55b096c4b860
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b096c4a5a0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c7000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b096c4ba40
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b096d7f2c0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 21020672 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:51.505437+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 21020672 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:52.505683+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 21020672 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205559 data_alloc: 218103808 data_used: 7114752
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:53.505914+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa313000/0x0/0x4ffc00000, data 0xe824a6/0xf49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 21020672 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:54.506224+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 21020672 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa313000/0x0/0x4ffc00000, data 0xe824a6/0xf49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:55.506455+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111689728 unmapped: 21012480 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:56.506665+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111689728 unmapped: 21012480 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a094400
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094400 session 0x55b0994fde00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:57.507488+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b096d95c00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111689728 unmapped: 21012480 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205559 data_alloc: 218103808 data_used: 7114752
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:58.507646+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 20996096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:59.507801+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 20996096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa313000/0x0/0x4ffc00000, data 0xe824a6/0xf49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:00.507973+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 112394240 unmapped: 20307968 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa313000/0x0/0x4ffc00000, data 0xe824a6/0xf49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:01.508299+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 112394240 unmapped: 20307968 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:02.508570+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa313000/0x0/0x4ffc00000, data 0xe824a6/0xf49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 112402432 unmapped: 20299776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1241127 data_alloc: 234881024 data_used: 12398592
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:03.508785+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 112402432 unmapped: 20299776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:04.508995+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 112402432 unmapped: 20299776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:05.509210+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 112402432 unmapped: 20299776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa313000/0x0/0x4ffc00000, data 0xe824a6/0xf49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:06.509473+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa313000/0x0/0x4ffc00000, data 0xe824a6/0xf49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 112402432 unmapped: 20299776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:07.509643+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 112402432 unmapped: 20299776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1241127 data_alloc: 234881024 data_used: 12398592
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:08.509780+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 112402432 unmapped: 20299776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.066671371s of 19.101375580s, submitted: 6
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:09.509962+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 15245312 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:10.510123+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 14950400 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:11.510287+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117932032 unmapped: 14770176 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9b4a000/0x0/0x4ffc00000, data 0x164b4a6/0x1712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:12.510448+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117940224 unmapped: 14761984 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310445 data_alloc: 234881024 data_used: 13742080
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:13.510638+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117940224 unmapped: 14761984 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:14.510849+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 14753792 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:15.511076+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9b4a000/0x0/0x4ffc00000, data 0x164b4a6/0x1712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 14753792 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:16.511309+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 14753792 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:17.511498+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 14753792 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310445 data_alloc: 234881024 data_used: 13742080
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:18.511687+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 14753792 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9b4a000/0x0/0x4ffc00000, data 0x164b4a6/0x1712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:19.511892+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117956608 unmapped: 14745600 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:20.512079+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117956608 unmapped: 14745600 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9b4a000/0x0/0x4ffc00000, data 0x164b4a6/0x1712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:21.512450+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117956608 unmapped: 14745600 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:22.512614+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117956608 unmapped: 14745600 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310445 data_alloc: 234881024 data_used: 13742080
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:23.512840+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117956608 unmapped: 14745600 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:24.513040+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117956608 unmapped: 14745600 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:25.513266+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 14737408 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:26.513445+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 14737408 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9b4a000/0x0/0x4ffc00000, data 0x164b4a6/0x1712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:27.513597+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 14737408 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310445 data_alloc: 234881024 data_used: 13742080
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:28.513736+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 14737408 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:29.513949+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 14737408 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:30.514130+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 14737408 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9b4a000/0x0/0x4ffc00000, data 0x164b4a6/0x1712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:31.514279+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 14737408 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:32.514434+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 14737408 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310445 data_alloc: 234881024 data_used: 13742080
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:33.514661+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 14737408 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:34.514847+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9b4a000/0x0/0x4ffc00000, data 0x164b4a6/0x1712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 14737408 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:35.514981+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 14737408 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c7000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.543247223s of 26.644886017s, submitted: 51
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b099009680
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b098f792c0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a094800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094800 session 0x55b098e29680
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a094c00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094c00 session 0x55b09874fe00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:36.515128+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a095000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a095000 session 0x55b096ddd680
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117538816 unmapped: 15163392 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99d9000/0x0/0x4ffc00000, data 0x17bc4a6/0x1883000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:37.515382+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117538816 unmapped: 15163392 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332538 data_alloc: 234881024 data_used: 13742080
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:38.515578+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117538816 unmapped: 15163392 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:39.515830+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117538816 unmapped: 15163392 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:40.516039+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117538816 unmapped: 15163392 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c7000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b0994fc5a0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:41.516459+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a094800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117538816 unmapped: 15163392 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:42.516751+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99d9000/0x0/0x4ffc00000, data 0x17bc4a6/0x1883000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117833728 unmapped: 14868480 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339966 data_alloc: 234881024 data_used: 14827520
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:43.516927+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117833728 unmapped: 14868480 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:44.517111+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99d9000/0x0/0x4ffc00000, data 0x17bc4a6/0x1883000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117850112 unmapped: 14852096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:45.517417+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99d9000/0x0/0x4ffc00000, data 0x17bc4a6/0x1883000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117850112 unmapped: 14852096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:46.517587+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117850112 unmapped: 14852096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:47.517757+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117850112 unmapped: 14852096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1340574 data_alloc: 234881024 data_used: 14888960
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:48.517994+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99d9000/0x0/0x4ffc00000, data 0x17bc4a6/0x1883000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117850112 unmapped: 14852096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:49.518262+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117850112 unmapped: 14852096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:50.518458+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117800960 unmapped: 14901248 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:51.518622+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117800960 unmapped: 14901248 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99d9000/0x0/0x4ffc00000, data 0x17bc4a6/0x1883000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:52.518807+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.697357178s of 16.766319275s, submitted: 23
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119865344 unmapped: 12836864 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1388676 data_alloc: 234881024 data_used: 15142912
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:53.518976+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 121815040 unmapped: 10887168 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:54.519209+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120586240 unmapped: 12115968 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:55.519413+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120586240 unmapped: 12115968 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9345000/0x0/0x4ffc00000, data 0x1e4a4a6/0x1f11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:56.519629+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120586240 unmapped: 12115968 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:57.519904+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120586240 unmapped: 12115968 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1393116 data_alloc: 234881024 data_used: 14974976
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:58.520116+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120586240 unmapped: 12115968 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:59.520379+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120586240 unmapped: 12115968 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:00.520546+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 12451840 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:01.520780+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9327000/0x0/0x4ffc00000, data 0x1e6e4a6/0x1f35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9327000/0x0/0x4ffc00000, data 0x1e6e4a6/0x1f35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 12451840 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:02.521007+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 12451840 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1392508 data_alloc: 234881024 data_used: 14974976
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:03.521165+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120258560 unmapped: 12443648 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:04.521395+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120258560 unmapped: 12443648 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:05.521521+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 12435456 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:06.521787+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 12435456 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:07.521966+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9327000/0x0/0x4ffc00000, data 0x1e6e4a6/0x1f35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 12435456 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1392508 data_alloc: 234881024 data_used: 14974976
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:08.522165+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 12435456 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:09.522511+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9327000/0x0/0x4ffc00000, data 0x1e6e4a6/0x1f35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 12435456 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9327000/0x0/0x4ffc00000, data 0x1e6e4a6/0x1f35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:10.522694+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 12427264 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:11.522918+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9327000/0x0/0x4ffc00000, data 0x1e6e4a6/0x1f35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.087865829s of 19.344846725s, submitted: 102
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 12427264 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:12.523082+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 12427264 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1392228 data_alloc: 234881024 data_used: 14974976
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:13.523255+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9324000/0x0/0x4ffc00000, data 0x1e714a6/0x1f38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 12427264 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:14.523456+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 12427264 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:15.523642+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 12427264 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:16.523915+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9324000/0x0/0x4ffc00000, data 0x1e714a6/0x1f38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 12427264 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:17.524096+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 12427264 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1392228 data_alloc: 234881024 data_used: 14974976
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:18.524445+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 12427264 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:19.524720+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 12427264 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:20.524940+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:21.525128+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:22.525413+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9324000/0x0/0x4ffc00000, data 0x1e714a6/0x1f38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1392228 data_alloc: 234881024 data_used: 14974976
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:23.525615+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:24.525832+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.535310745s of 12.545021057s, submitted: 2
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:25.526011+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:26.526211+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:27.526389+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9324000/0x0/0x4ffc00000, data 0x1e714a6/0x1f38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120291328 unmapped: 12410880 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1392396 data_alloc: 234881024 data_used: 14974976
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:28.526603+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120291328 unmapped: 12410880 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:29.526891+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9324000/0x0/0x4ffc00000, data 0x1e714a6/0x1f38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0963a4800 session 0x55b099009860
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a095800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120291328 unmapped: 12410880 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:30.527084+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120291328 unmapped: 12410880 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:31.527230+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9324000/0x0/0x4ffc00000, data 0x1e714a6/0x1f38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120291328 unmapped: 12410880 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:32.527479+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120291328 unmapped: 12410880 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:33.527736+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1392396 data_alloc: 234881024 data_used: 14974976
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120356864 unmapped: 12345344 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:34.528006+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.840996742s of 10.006482124s, submitted: 55
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120422400 unmapped: 12279808 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:35.528137+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9324000/0x0/0x4ffc00000, data 0x1e714a6/0x1f38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [0,0,1])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120594432 unmapped: 12107776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:36.528472+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120594432 unmapped: 12107776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:37.528681+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120594432 unmapped: 12107776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:38.528870+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1391892 data_alloc: 234881024 data_used: 14974976
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120594432 unmapped: 12107776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:39.529212+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9324000/0x0/0x4ffc00000, data 0x1e714a6/0x1f38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120594432 unmapped: 12107776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:40.529572+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120594432 unmapped: 12107776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b09905c3c0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094800 session 0x55b0988ddc20
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:41.529848+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a095c00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a095c00 session 0x55b0970df4a0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 13443072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:42.530093+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 13443072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:43.530267+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315902 data_alloc: 234881024 data_used: 13803520
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9b4a000/0x0/0x4ffc00000, data 0x164b4a6/0x1712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 13443072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:44.530514+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9b4a000/0x0/0x4ffc00000, data 0x164b4a6/0x1712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 13434880 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:45.530691+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 13434880 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:46.530809+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 13434880 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:47.531030+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 13426688 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:48.531293+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315902 data_alloc: 234881024 data_used: 13803520
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b096d95c00 session 0x55b0994fda40
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b096d7e5a0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c7000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.445354462s of 14.472743034s, submitted: 373
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115220480 unmapped: 17481728 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:49.531534+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b096ddc3c0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115220480 unmapped: 17481728 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:50.531674+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115220480 unmapped: 17481728 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:51.531873+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115220480 unmapped: 17481728 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:52.532110+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115220480 unmapped: 17481728 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:53.532388+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181100 data_alloc: 218103808 data_used: 7114752
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115220480 unmapped: 17481728 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:54.532585+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115220480 unmapped: 17481728 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:55.532779+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115220480 unmapped: 17481728 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:56.533090+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115220480 unmapped: 17481728 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:57.533291+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115220480 unmapped: 17481728 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:58.533530+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181100 data_alloc: 218103808 data_used: 7114752
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115228672 unmapped: 17473536 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:59.533827+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115228672 unmapped: 17473536 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:00.534013+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115236864 unmapped: 17465344 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:01.534218+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115236864 unmapped: 17465344 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:02.534423+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115236864 unmapped: 17465344 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:03.534599+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181100 data_alloc: 218103808 data_used: 7114752
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115236864 unmapped: 17465344 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:04.534765+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115236864 unmapped: 17465344 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:05.534948+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115236864 unmapped: 17465344 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:06.535158+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115245056 unmapped: 17457152 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:07.535453+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115245056 unmapped: 17457152 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:08.535804+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181100 data_alloc: 218103808 data_used: 7114752
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115245056 unmapped: 17457152 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:09.535966+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115245056 unmapped: 17457152 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:10.536144+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115245056 unmapped: 17457152 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:11.536364+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115245056 unmapped: 17457152 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:12.536633+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115245056 unmapped: 17457152 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:13.536807+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181100 data_alloc: 218103808 data_used: 7114752
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115245056 unmapped: 17457152 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:14.536993+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 17448960 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:15.537239+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b0987bc000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a094800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094800 session 0x55b098681860
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a095c00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a095c00 session 0x55b098f794a0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b098f783c0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c7000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.050231934s of 27.103757858s, submitted: 15
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114810880 unmapped: 24707072 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b09638fe00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:16.537383+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b0994332c0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a094800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094800 session 0x55b099433e00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a4a8000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a4a8000 session 0x55b0994321e0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b09586ef00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114827264 unmapped: 24690688 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:17.537586+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9df6000/0x0/0x4ffc00000, data 0x139f4a6/0x1466000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114827264 unmapped: 24690688 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:18.537844+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263995 data_alloc: 218103808 data_used: 7114752
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114827264 unmapped: 24690688 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:19.538098+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c7000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b0994fcd20
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a094800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115130368 unmapped: 24387584 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:20.538296+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115130368 unmapped: 24387584 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:21.538472+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9dd2000/0x0/0x4ffc00000, data 0x13c34a6/0x148a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:22.538727+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 118431744 unmapped: 21086208 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:23.538922+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119930880 unmapped: 19587072 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336483 data_alloc: 234881024 data_used: 17555456
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:24.539164+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119930880 unmapped: 19587072 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:25.539430+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119930880 unmapped: 19587072 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.006184578s of 10.114780426s, submitted: 27
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b096c4a3c0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094800 session 0x55b097a14b40
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:26.540138+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119922688 unmapped: 19595264 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9dd2000/0x0/0x4ffc00000, data 0x13c34a6/0x148a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a4a8400
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a4a8400 session 0x55b097a69860
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:27.540318+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 26353664 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:28.540716+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 26353664 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189775 data_alloc: 218103808 data_used: 7114752
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:29.541113+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 26353664 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:30.541442+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 26353664 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:31.541681+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 26353664 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:32.541903+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 26353664 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:33.545837+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 26353664 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189775 data_alloc: 218103808 data_used: 7114752
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:34.549097+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 26353664 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:35.550470+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 26353664 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:36.552975+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 26353664 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:37.553968+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 26353664 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:38.555754+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 26353664 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189775 data_alloc: 218103808 data_used: 7114752
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b0982841e0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c7000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b096d5eb40
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b0988dcd20
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a094800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094800 session 0x55b09638f0e0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a4a8800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.877738953s of 12.966034889s, submitted: 31
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:39.556030+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a4a8800 session 0x55b098f8bc20
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b096e1da40
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c7000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b098fe41e0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113459200 unmapped: 26058752 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b09874b2c0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a094800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094800 session 0x55b0990092c0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:40.556769+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113467392 unmapped: 26050560 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:41.557412+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113475584 unmapped: 26042368 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa47b000/0x0/0x4ffc00000, data 0xd19508/0xde1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:42.558082+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113475584 unmapped: 26042368 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:43.558353+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113475584 unmapped: 26042368 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223186 data_alloc: 218103808 data_used: 7118848
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a4a8c00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a4a8c00 session 0x55b09723de00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:44.560183+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113475584 unmapped: 26042368 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b098857e00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa47b000/0x0/0x4ffc00000, data 0xd19508/0xde1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c7000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b09638f4a0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:45.560691+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b09936de00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113500160 unmapped: 26017792 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a094800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a4a9000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa479000/0x0/0x4ffc00000, data 0xd1953b/0xde3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:46.560891+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113500160 unmapped: 26017792 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: mgrc ms_handle_reset ms_handle_reset con 0x55b096656000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/194506248
Oct 10 10:23:01 compute-1 ceph-osd[76867]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/194506248,v1:192.168.122.100:6801/194506248]
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: get_auth_request con 0x55b09a4a8c00 auth_method 0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: mgrc handle_mgr_configure stats_period=5
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:47.561184+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113582080 unmapped: 25935872 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b096dcac00 session 0x55b0988cc5a0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b096dcb000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099068000 session 0x55b0991deb40
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b096dcac00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:48.561428+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113582080 unmapped: 25935872 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237429 data_alloc: 218103808 data_used: 8724480
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:49.562116+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 25427968 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa479000/0x0/0x4ffc00000, data 0xd1953b/0xde3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:50.562405+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 25427968 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:51.562896+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114098176 unmapped: 25419776 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:52.563070+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114098176 unmapped: 25419776 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094800 session 0x55b097a15680
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a4a9000 session 0x55b096e1cd20
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.847922325s of 13.942553520s, submitted: 33
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:53.563222+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b097a15860
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa479000/0x0/0x4ffc00000, data 0xd1953b/0xde3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196900 data_alloc: 218103808 data_used: 7114752
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:54.563448+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:55.563612+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81c000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:56.563749+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:57.563951+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81c000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:58.564178+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196900 data_alloc: 218103808 data_used: 7114752
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:59.564439+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:00.564637+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:01.564815+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81c000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:02.565009+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:03.565254+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81c000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196900 data_alloc: 218103808 data_used: 7114752
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:04.565437+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:05.565578+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:06.565765+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:07.565983+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81c000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:08.566252+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196900 data_alloc: 218103808 data_used: 7114752
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:09.566590+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:10.566782+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:11.566965+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113336320 unmapped: 26181632 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81c000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:12.567141+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113344512 unmapped: 26173440 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81c000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:13.567278+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113344512 unmapped: 26173440 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196900 data_alloc: 218103808 data_used: 7114752
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:14.567446+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113344512 unmapped: 26173440 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:15.567641+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113344512 unmapped: 26173440 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c7000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b096c4af00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b0988dd4a0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a094800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094800 session 0x55b098f8b2c0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a4a9800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a4a9800 session 0x55b097a68780
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.952396393s of 23.063278198s, submitted: 31
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81b000/0x0/0x4ffc00000, data 0x9784bf/0xa3f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:16.567843+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b0991df860
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c7000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b098e28000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b09723cb40
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 26165248 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a094800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094800 session 0x55b0986803c0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a4a9c00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a4a9c00 session 0x55b098680000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:17.568043+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 26165248 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:18.568245+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 26165248 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239657 data_alloc: 218103808 data_used: 7114752
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa21d000/0x0/0x4ffc00000, data 0xf784f8/0x103f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:19.568508+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113360896 unmapped: 26157056 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:20.568670+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113360896 unmapped: 26157056 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:21.568862+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113369088 unmapped: 26148864 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa21d000/0x0/0x4ffc00000, data 0xf784f8/0x103f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:22.569095+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113401856 unmapped: 26116096 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:23.569266+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113401856 unmapped: 26116096 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239657 data_alloc: 218103808 data_used: 7114752
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b098681860
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:24.569456+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c7000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b098680b40
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113401856 unmapped: 26116096 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa21d000/0x0/0x4ffc00000, data 0xf784f8/0x103f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b0988cc1e0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:25.569653+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a094800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa21d000/0x0/0x4ffc00000, data 0xf784f8/0x103f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094800 session 0x55b09874ab40
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 26107904 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09bade000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09bade400
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:26.569851+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 26099712 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:27.569953+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 25485312 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:28.570121+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa21c000/0x0/0x4ffc00000, data 0xf78508/0x1040000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114622464 unmapped: 24895488 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283703 data_alloc: 234881024 data_used: 13406208
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:29.570306+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114630656 unmapped: 24887296 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:30.570515+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114630656 unmapped: 24887296 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:31.570724+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114630656 unmapped: 24887296 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:32.570892+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114630656 unmapped: 24887296 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:33.571077+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa21c000/0x0/0x4ffc00000, data 0xf78508/0x1040000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114630656 unmapped: 24887296 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283703 data_alloc: 234881024 data_used: 13406208
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa21c000/0x0/0x4ffc00000, data 0xf78508/0x1040000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:34.571230+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114630656 unmapped: 24887296 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:35.571391+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114638848 unmapped: 24879104 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:36.571578+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114638848 unmapped: 24879104 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.799320221s of 20.981313705s, submitted: 25
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:37.571743+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 118800384 unmapped: 20717568 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa1a5000/0x0/0x4ffc00000, data 0xfef508/0x10b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:38.571902+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120127488 unmapped: 19390464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9ec9000/0x0/0x4ffc00000, data 0x12cb508/0x1393000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320379 data_alloc: 234881024 data_used: 13844480
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:39.572114+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120127488 unmapped: 19390464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:40.572429+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120127488 unmapped: 19390464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:41.572602+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120127488 unmapped: 19390464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:42.572802+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120135680 unmapped: 19382272 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:43.572982+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120135680 unmapped: 19382272 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320379 data_alloc: 234881024 data_used: 13844480
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9ea1000/0x0/0x4ffc00000, data 0x12f3508/0x13bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:44.573152+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120135680 unmapped: 19382272 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9ea1000/0x0/0x4ffc00000, data 0x12f3508/0x13bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:45.573423+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120143872 unmapped: 19374080 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:46.573642+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120143872 unmapped: 19374080 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9e9f000/0x0/0x4ffc00000, data 0x12f5508/0x13bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:47.573897+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120143872 unmapped: 19374080 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:48.574194+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120143872 unmapped: 19374080 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320655 data_alloc: 234881024 data_used: 13844480
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:49.574457+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120143872 unmapped: 19374080 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:50.574674+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9e9f000/0x0/0x4ffc00000, data 0x12f5508/0x13bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120152064 unmapped: 19365888 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:51.574885+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.374687195s of 14.491823196s, submitted: 54
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119422976 unmapped: 20094976 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09bade000 session 0x55b09874a5a0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09bade400 session 0x55b0990094a0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:52.575037+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119422976 unmapped: 20094976 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b098f794a0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:53.575266+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204502 data_alloc: 218103808 data_used: 7114752
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:54.575487+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa485000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:55.575711+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:56.575984+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:57.576236+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:58.576472+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204502 data_alloc: 218103808 data_used: 7114752
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:59.576742+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:00.577021+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa485000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa485000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:01.577184+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:02.577439+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa485000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:03.577645+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204502 data_alloc: 218103808 data_used: 7114752
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:04.577816+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:05.578078+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:06.578285+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa485000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:07.578512+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:08.578713+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204502 data_alloc: 218103808 data_used: 7114752
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:09.578912+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:10.579080+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa485000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:11.579221+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:12.579440+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:13.579565+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204502 data_alloc: 218103808 data_used: 7114752
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:14.579716+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa485000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:15.579839+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:16.580008+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:17.580255+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c7000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 25.981967926s of 26.068605423s, submitted: 26
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b098284780
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 24502272 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b098fe54a0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a094800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094800 session 0x55b09723d2c0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b0970df680
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c7000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b098f8a3c0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:18.580406+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 24502272 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223317 data_alloc: 218103808 data_used: 7114752
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:19.580582+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 24502272 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa613000/0x0/0x4ffc00000, data 0xb824f8/0xc49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:20.580724+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 24502272 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:21.580900+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa613000/0x0/0x4ffc00000, data 0xb824f8/0xc49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 24502272 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b098f8b4a0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:22.581062+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09bade400
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09bade800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 24502272 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:23.581208+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa613000/0x0/0x4ffc00000, data 0xb824f8/0xc49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 24502272 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226965 data_alloc: 218103808 data_used: 7639040
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:24.581389+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 24502272 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:25.581563+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa613000/0x0/0x4ffc00000, data 0xb824f8/0xc49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 24469504 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:26.581736+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 24469504 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:27.581893+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 24469504 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:28.582047+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 24469504 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237757 data_alloc: 218103808 data_used: 9252864
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:29.582297+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa613000/0x0/0x4ffc00000, data 0xb824f8/0xc49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 24436736 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:30.582488+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 24436736 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:31.582661+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 24436736 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:32.582839+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 24436736 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:33.583061+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.933888435s of 16.003017426s, submitted: 22
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119988224 unmapped: 19529728 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1307679 data_alloc: 218103808 data_used: 9330688
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:34.583249+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9c31000/0x0/0x4ffc00000, data 0x15644f8/0x162b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 118054912 unmapped: 21463040 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:35.583437+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09badec00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 118071296 unmapped: 21446656 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09badec00 session 0x55b098fe4780
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09badf000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09badf000 session 0x55b09936c3c0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09badf000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09badf000 session 0x55b09936d860
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b09638f4a0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:36.583623+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c7000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b09900ad20
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120176640 unmapped: 19341312 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:37.583923+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120176640 unmapped: 19341312 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f96d1000/0x0/0x4ffc00000, data 0x16b44f8/0x177b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:38.584151+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120176640 unmapped: 19341312 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341734 data_alloc: 234881024 data_used: 10129408
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:39.584439+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120176640 unmapped: 19341312 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:40.584700+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120176640 unmapped: 19341312 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f96d1000/0x0/0x4ffc00000, data 0x16b44f8/0x177b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:41.584975+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120176640 unmapped: 19341312 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b098fe5a40
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:42.585113+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09badec00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09badf400
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119898112 unmapped: 19619840 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:43.585287+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120004608 unmapped: 19513344 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1348526 data_alloc: 234881024 data_used: 10899456
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:44.585440+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 19603456 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:45.586462+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f96ad000/0x0/0x4ffc00000, data 0x16d84f8/0x179f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 19603456 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:46.587289+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 19603456 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:47.588140+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 19603456 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:48.588559+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 19603456 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1349742 data_alloc: 234881024 data_used: 11075584
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:49.589254+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 19603456 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:50.589613+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f96ad000/0x0/0x4ffc00000, data 0x16d84f8/0x179f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 19603456 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:51.590231+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 19603456 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:52.590427+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 19603456 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:53.590732+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 19603456 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1349742 data_alloc: 234881024 data_used: 11075584
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:54.590905+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.355663300s of 20.596637726s, submitted: 92
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122126336 unmapped: 17391616 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:55.591217+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122126336 unmapped: 17391616 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:56.591452+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f92d1000/0x0/0x4ffc00000, data 0x1aa64f8/0x1b6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 16556032 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:57.591703+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 16556032 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:58.591861+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9246000/0x0/0x4ffc00000, data 0x1b374f8/0x1bfe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 16556032 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1396772 data_alloc: 234881024 data_used: 12251136
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:59.592161+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9246000/0x0/0x4ffc00000, data 0x1b374f8/0x1bfe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 16556032 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:00.592416+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 16556032 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:01.592640+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 16556032 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:02.592761+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 16392192 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9246000/0x0/0x4ffc00000, data 0x1b374f8/0x1bfe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:03.592954+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 16392192 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1391276 data_alloc: 234881024 data_used: 12251136
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:04.593123+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 16392192 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:05.593262+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 16392192 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:06.593485+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 16392192 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:07.593778+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f922d000/0x0/0x4ffc00000, data 0x1b584f8/0x1c1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 16392192 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:08.593922+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 16392192 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1391276 data_alloc: 234881024 data_used: 12251136
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:09.594241+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 16392192 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:10.594449+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 16392192 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:11.594764+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 16392192 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:12.594918+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.976144791s of 18.175519943s, submitted: 91
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f922d000/0x0/0x4ffc00000, data 0x1b584f8/0x1c1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123183104 unmapped: 16334848 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:13.595196+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123183104 unmapped: 16334848 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1391364 data_alloc: 234881024 data_used: 12251136
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:14.595414+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09badec00 session 0x55b098fe41e0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09badf400 session 0x55b0988cd0e0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123166720 unmapped: 16351232 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:15.595526+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b0994fcf00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f97ee000/0x0/0x4ffc00000, data 0x15974f8/0x165e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120684544 unmapped: 18833408 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:16.595652+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120684544 unmapped: 18833408 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:17.595786+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120684544 unmapped: 18833408 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:18.595948+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120684544 unmapped: 18833408 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1327065 data_alloc: 234881024 data_used: 10133504
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:19.596180+0000)
Oct 10 10:23:01 compute-1 ceph-mon[79167]: from='client.16911 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:01 compute-1 ceph-mon[79167]: from='client.26030 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:01 compute-1 ceph-mon[79167]: from='client.26473 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:01 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/356895798' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 10 10:23:01 compute-1 ceph-mon[79167]: from='client.16926 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:01 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/320126284' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 10 10:23:01 compute-1 ceph-mon[79167]: from='client.26042 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:01 compute-1 ceph-mon[79167]: from='client.16932 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:01 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/1569461498' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 10 10:23:01 compute-1 ceph-mon[79167]: from='client.16944 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:01 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2296408099' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 10 10:23:01 compute-1 ceph-mon[79167]: from='client.26057 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:01 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:23:01 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/127520907' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 10:23:01 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/2579803735' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 10 10:23:01 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/196607994' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09bade400 session 0x55b09874a3c0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09bade800 session 0x55b098e29c20
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120684544 unmapped: 18833408 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c7000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:20.596314+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b096c4b860
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:21.596497+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa159000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:22.596671+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:23.596881+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1221069 data_alloc: 218103808 data_used: 7114752
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:24.597056+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa159000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:25.597301+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:26.597552+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:27.597805+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa159000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:28.597999+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1221069 data_alloc: 218103808 data_used: 7114752
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:29.598189+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:30.598447+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:31.598664+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa159000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:32.598852+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:33.599043+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1221069 data_alloc: 218103808 data_used: 7114752
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:34.599166+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:35.599464+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa159000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:36.599753+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:37.600009+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:38.600493+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:39.600836+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1221069 data_alloc: 218103808 data_used: 7114752
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:40.601031+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa159000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:41.601188+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:42.601443+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa159000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:43.601638+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa159000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:44.601880+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1221069 data_alloc: 218103808 data_used: 7114752
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:45.602079+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:46.602276+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 34.100059509s of 34.283664703s, submitted: 59
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b09723cd20
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09bade400
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09bade400 session 0x55b09723c960
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09bade800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09bade800 session 0x55b096d7f2c0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09badf400
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09badf400 session 0x55b096d7e5a0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b096d7e3c0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 23461888 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:47.602491+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:48.602644+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 23461888 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:49.602859+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 23461888 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313977 data_alloc: 218103808 data_used: 7114752
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f97f4000/0x0/0x4ffc00000, data 0x1592496/0x1658000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:50.603080+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 23461888 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:51.603207+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 23461888 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:52.603396+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 23461888 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f97f4000/0x0/0x4ffc00000, data 0x1592496/0x1658000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b096d7e000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b09874b2c0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:53.603575+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 23461888 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09bade400
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09bade400 session 0x55b09874a5a0
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09bade800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09bade800 session 0x55b0988cd860
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:54.603752+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 121315328 unmapped: 23453696 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09badf400
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1314282 data_alloc: 218103808 data_used: 7118848
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09badf000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:55.603965+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 23429120 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f97f4000/0x0/0x4ffc00000, data 0x1592496/0x1658000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:56.604092+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125091840 unmapped: 19677184 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:57.604262+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 126492672 unmapped: 18276352 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:58.604471+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 126492672 unmapped: 18276352 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:59.604696+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f97f4000/0x0/0x4ffc00000, data 0x1592496/0x1658000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 126492672 unmapped: 18276352 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1390566 data_alloc: 234881024 data_used: 18321408
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:00.604917+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 126492672 unmapped: 18276352 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:01.605138+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 126492672 unmapped: 18276352 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:02.605468+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 126492672 unmapped: 18276352 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:03.605673+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 126492672 unmapped: 18276352 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:04.605814+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 126492672 unmapped: 18276352 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1390566 data_alloc: 234881024 data_used: 18321408
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f97f4000/0x0/0x4ffc00000, data 0x1592496/0x1658000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:05.605979+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 126492672 unmapped: 18276352 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.678905487s of 18.792392731s, submitted: 24
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:06.606208+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 10878976 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:07.606385+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 132464640 unmapped: 12304384 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f8c37000/0x0/0x4ffc00000, data 0x2149496/0x220f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:08.606582+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 132464640 unmapped: 12304384 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:09.606889+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 132464640 unmapped: 12304384 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497028 data_alloc: 234881024 data_used: 19230720
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f8c37000/0x0/0x4ffc00000, data 0x2149496/0x220f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:10.607084+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 132497408 unmapped: 12271616 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:11.607261+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 132497408 unmapped: 12271616 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f8c37000/0x0/0x4ffc00000, data 0x2149496/0x220f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [0,1,1])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:12.607586+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 132497408 unmapped: 12271616 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:13.607764+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 132497408 unmapped: 12271616 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:14.607980+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 132497408 unmapped: 12271616 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1496100 data_alloc: 234881024 data_used: 19238912
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:15.608182+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 132497408 unmapped: 12271616 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f8c19000/0x0/0x4ffc00000, data 0x216d496/0x2233000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:16.608461+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 132497408 unmapped: 12271616 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09badf400 session 0x55b09905c000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.617673874s of 10.932350159s, submitted: 136
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09badf000 session 0x55b0994fde00
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6800
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b098fe4000
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:17.608630+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:18.608809+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:19.609028+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:20.609176+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:21.609447+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:22.609608+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:23.609752+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:24.609975+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:25.610231+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:26.610413+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:27.610575+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:28.610748+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:29.610958+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:30.611174+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:31.611380+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:32.611517+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:33.611680+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:34.611918+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:35.612128+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:36.612287+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:37.612420+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:38.612557+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:39.612765+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:40.612926+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:41.613114+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:42.613234+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:43.613487+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:44.613692+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:45.613921+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:46.614113+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:47.614280+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:48.614480+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:49.614747+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:50.614854+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:51.615053+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:52.615248+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:53.615406+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:54.615611+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:55.615762+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:56.615960+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:57.616156+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:58.616420+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:59.616660+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:00.616838+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:01.617003+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:02.617183+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:03.617398+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:04.617591+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:05.617845+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:06.618025+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:07.618262+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:08.618462+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:09.618751+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:10.618933+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:11.619163+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:12.619388+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:13.619549+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:14.619710+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:15.619892+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:16.620043+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:17.620212+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:18.620401+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:19.620569+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 20799488 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:20.620796+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 20799488 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:21.621010+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 20799488 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:22.621192+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 20799488 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:23.621366+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 20799488 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:24.621500+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 20799488 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:25.621643+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 20799488 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:26.621810+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 20799488 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:27.621949+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 20799488 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:28.622079+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: do_command 'config diff' '{prefix=config diff}'
Oct 10 10:23:01 compute-1 ceph-osd[76867]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 20815872 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: do_command 'config show' '{prefix=config show}'
Oct 10 10:23:01 compute-1 ceph-osd[76867]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 10 10:23:01 compute-1 ceph-osd[76867]: do_command 'counter dump' '{prefix=counter dump}'
Oct 10 10:23:01 compute-1 ceph-osd[76867]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 10 10:23:01 compute-1 ceph-osd[76867]: do_command 'counter schema' '{prefix=counter schema}'
Oct 10 10:23:01 compute-1 ceph-osd[76867]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:29.622225+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123781120 unmapped: 20987904 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:23:01 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:23:01 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:23:01 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:30.622343+0000)
Oct 10 10:23:01 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:23:01 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123166720 unmapped: 21602304 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:23:01 compute-1 ceph-osd[76867]: do_command 'log dump' '{prefix=log dump}'
Oct 10 10:23:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct 10 10:23:02 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3971028922' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 10:23:02 compute-1 nova_compute[235132]: 2025-10-10 10:23:02.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:23:02 compute-1 nova_compute[235132]: 2025-10-10 10:23:02.044 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 10:23:02 compute-1 nova_compute[235132]: 2025-10-10 10:23:02.044 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 10:23:02 compute-1 nova_compute[235132]: 2025-10-10 10:23:02.070 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 10 10:23:02 compute-1 nova_compute[235132]: 2025-10-10 10:23:02.070 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:23:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Oct 10 10:23:02 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/661113299' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 10 10:23:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:23:02 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:02 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:23:02 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:02.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:23:02 compute-1 ceph-mon[79167]: from='client.26509 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:02 compute-1 ceph-mon[79167]: pgmap v1113: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:23:02 compute-1 ceph-mon[79167]: from='client.16953 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:02 compute-1 ceph-mon[79167]: from='client.26072 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:02 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1623996306' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 10 10:23:02 compute-1 ceph-mon[79167]: from='client.26530 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:02 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3971028922' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 10:23:02 compute-1 ceph-mon[79167]: from='client.16968 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:02 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3643359259' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 10:23:02 compute-1 ceph-mon[79167]: from='client.26093 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:02 compute-1 ceph-mon[79167]: from='client.26557 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:02 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1196706439' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 10 10:23:02 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/661113299' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 10 10:23:02 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2633175545' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 10 10:23:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Oct 10 10:23:02 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3336206771' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 10 10:23:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:23:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:23:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:23:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:23:03 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:03 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:03 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:03.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:03 compute-1 crontab[251198]: (root) LIST (root)
Oct 10 10:23:03 compute-1 nova_compute[235132]: 2025-10-10 10:23:03.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:23:03 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Oct 10 10:23:03 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3878076498' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 10 10:23:03 compute-1 ceph-mon[79167]: from='client.16983 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:03 compute-1 ceph-mon[79167]: from='client.26111 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:03 compute-1 ceph-mon[79167]: from='client.26572 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:03 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3336206771' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 10 10:23:03 compute-1 ceph-mon[79167]: from='client.26126 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:03 compute-1 ceph-mon[79167]: from='client.17001 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:03 compute-1 ceph-mon[79167]: from='client.26587 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:03 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/914417947' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 10 10:23:03 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/563292797' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 10 10:23:03 compute-1 ceph-mon[79167]: from='client.17022 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:03 compute-1 ceph-mon[79167]: from='client.17025 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:03 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/781763434' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 10 10:23:03 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/748618102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:23:04 compute-1 nova_compute[235132]: 2025-10-10 10:23:04.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:23:04 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Oct 10 10:23:04 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2189172015' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 10 10:23:04 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Oct 10 10:23:04 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2057841537' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 10 10:23:04 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Oct 10 10:23:04 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3927813275' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 10 10:23:04 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:04 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:04 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:04.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:04 compute-1 ceph-mon[79167]: from='client.26614 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:04 compute-1 ceph-mon[79167]: pgmap v1114: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:23:04 compute-1 ceph-mon[79167]: from='client.17052 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:04 compute-1 ceph-mon[79167]: from='client.26156 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:04 compute-1 ceph-mon[79167]: from='client.26638 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:04 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3878076498' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 10 10:23:04 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2622965518' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 10 10:23:04 compute-1 ceph-mon[79167]: from='client.17067 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:04 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/54031956' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 10 10:23:04 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/2189172015' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 10 10:23:04 compute-1 ceph-mon[79167]: from='client.26165 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:04 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1788666203' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 10 10:23:04 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1918956957' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 10 10:23:04 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3442204986' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:23:04 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/743398584' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 10 10:23:04 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/2057841537' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 10 10:23:04 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3927813275' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 10 10:23:04 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/983784123' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 10 10:23:05 compute-1 nova_compute[235132]: 2025-10-10 10:23:05.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:23:05 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Oct 10 10:23:05 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3007931880' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 10 10:23:05 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Oct 10 10:23:05 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1496720933' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 10 10:23:05 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Oct 10 10:23:05 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/166812341' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 10 10:23:05 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:05 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:05 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:05.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:05 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Oct 10 10:23:05 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2734253997' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 10 10:23:05 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Oct 10 10:23:05 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1178817591' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 10 10:23:05 compute-1 nova_compute[235132]: 2025-10-10 10:23:05.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:23:05 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Oct 10 10:23:05 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/574716574' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 10 10:23:05 compute-1 ceph-mon[79167]: from='client.17088 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:05 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1496940488' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 10 10:23:05 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/627566640' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 10 10:23:05 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1961890703' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 10 10:23:05 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3007931880' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 10 10:23:05 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/1496720933' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 10 10:23:05 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/166812341' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 10 10:23:05 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/4102851143' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 10 10:23:05 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/3249578032' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 10 10:23:05 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/2734253997' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 10 10:23:05 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1358533371' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 10 10:23:05 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/1178817591' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 10 10:23:05 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1692194911' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 10 10:23:05 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3908035813' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 10 10:23:05 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/574716574' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 10 10:23:06 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Oct 10 10:23:06 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3456937404' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 10 10:23:06 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Oct 10 10:23:06 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/496033978' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 10 10:23:06 compute-1 systemd[1]: Starting Hostname Service...
Oct 10 10:23:06 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Oct 10 10:23:06 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2553354890' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 10 10:23:06 compute-1 systemd[1]: Started Hostname Service.
Oct 10 10:23:06 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct 10 10:23:06 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3152055571' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 10 10:23:06 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:06 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:06 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:06.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:06 compute-1 ceph-mon[79167]: pgmap v1115: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:23:06 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1245277696' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 10 10:23:06 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/362286490' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 10 10:23:06 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3456937404' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 10 10:23:06 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/4270522901' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 10 10:23:06 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3276084272' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 10 10:23:06 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/496033978' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 10 10:23:06 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/3035571500' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 10 10:23:06 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/330407203' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 10 10:23:06 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1993800113' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 10 10:23:06 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/2553354890' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 10 10:23:06 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/190481607' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 10 10:23:06 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3152055571' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 10 10:23:06 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/341481678' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 10 10:23:06 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/214331611' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 10 10:23:06 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Oct 10 10:23:06 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3212453945' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 10 10:23:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Oct 10 10:23:07 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1152939535' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 10 10:23:07 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:07 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:23:07 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:07.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:23:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Oct 10 10:23:07 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2749167785' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 10 10:23:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:23:07 compute-1 ceph-mon[79167]: from='client.26773 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:07 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2432505021' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 10 10:23:07 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3212453945' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 10 10:23:07 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/1152939535' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 10 10:23:07 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2358813671' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 10 10:23:07 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1301091987' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 10 10:23:07 compute-1 ceph-mon[79167]: from='client.26794 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:07 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/77092020' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 10 10:23:07 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/2749167785' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 10 10:23:07 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/85375489' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 10 10:23:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:23:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:23:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:23:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:23:08 compute-1 nova_compute[235132]: 2025-10-10 10:23:08.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:23:08 compute-1 nova_compute[235132]: 2025-10-10 10:23:08.066 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:23:08 compute-1 nova_compute[235132]: 2025-10-10 10:23:08.066 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:23:08 compute-1 nova_compute[235132]: 2025-10-10 10:23:08.066 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:23:08 compute-1 nova_compute[235132]: 2025-10-10 10:23:08.066 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:23:08 compute-1 nova_compute[235132]: 2025-10-10 10:23:08.067 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:23:08 compute-1 nova_compute[235132]: 2025-10-10 10:23:08.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:23:08 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:23:08 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1010273267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:23:08 compute-1 nova_compute[235132]: 2025-10-10 10:23:08.541 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:23:08 compute-1 nova_compute[235132]: 2025-10-10 10:23:08.700 2 WARNING nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:23:08 compute-1 nova_compute[235132]: 2025-10-10 10:23:08.701 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4671MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:23:08 compute-1 nova_compute[235132]: 2025-10-10 10:23:08.702 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:23:08 compute-1 nova_compute[235132]: 2025-10-10 10:23:08.702 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:23:08 compute-1 nova_compute[235132]: 2025-10-10 10:23:08.786 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:23:08 compute-1 nova_compute[235132]: 2025-10-10 10:23:08.787 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:23:08 compute-1 nova_compute[235132]: 2025-10-10 10:23:08.811 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:23:08 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:08 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:08 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:08.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:08 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Oct 10 10:23:08 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3785517594' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 10 10:23:09 compute-1 ceph-mon[79167]: from='client.26303 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:09 compute-1 ceph-mon[79167]: pgmap v1116: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:23:09 compute-1 ceph-mon[79167]: from='client.26806 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:09 compute-1 ceph-mon[79167]: from='client.17247 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:09 compute-1 ceph-mon[79167]: from='client.26312 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:09 compute-1 ceph-mon[79167]: from='client.26318 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:09 compute-1 ceph-mon[79167]: from='client.26830 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:09 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/311730347' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 10 10:23:09 compute-1 ceph-mon[79167]: from='client.17265 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:09 compute-1 ceph-mon[79167]: from='client.17271 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:09 compute-1 ceph-mon[79167]: from='client.26327 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:09 compute-1 ceph-mon[79167]: from='client.26854 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:09 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/1010273267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:23:09 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2112620568' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 10 10:23:09 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3785517594' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 10 10:23:09 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:23:09 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1415916768' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:23:09 compute-1 nova_compute[235132]: 2025-10-10 10:23:09.287 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:23:09 compute-1 nova_compute[235132]: 2025-10-10 10:23:09.292 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:23:09 compute-1 nova_compute[235132]: 2025-10-10 10:23:09.330 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:23:09 compute-1 nova_compute[235132]: 2025-10-10 10:23:09.333 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:23:09 compute-1 nova_compute[235132]: 2025-10-10 10:23:09.334 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:23:09 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:09 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:09 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:09.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:09 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "versions"} v 0)
Oct 10 10:23:09 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/892024631' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 10 10:23:09 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Oct 10 10:23:09 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3580709824' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 10 10:23:10 compute-1 ceph-mon[79167]: from='client.17289 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:10 compute-1 ceph-mon[79167]: from='client.26351 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:10 compute-1 ceph-mon[79167]: from='client.26878 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:10 compute-1 ceph-mon[79167]: from='client.17295 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:10 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3805709205' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 10 10:23:10 compute-1 ceph-mon[79167]: from='client.26366 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:10 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1170104191' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 10 10:23:10 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/1415916768' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:23:10 compute-1 ceph-mon[79167]: from='client.26905 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:10 compute-1 ceph-mon[79167]: from='client.17328 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:10 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/892024631' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 10 10:23:10 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/563491473' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 10 10:23:10 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/4079194258' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 10 10:23:10 compute-1 ceph-mon[79167]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 10:23:10 compute-1 ceph-mon[79167]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 10:23:10 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3580709824' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 10 10:23:10 compute-1 ceph-mon[79167]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 10:23:10 compute-1 ceph-mon[79167]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 10:23:10 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3297017605' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 10 10:23:10 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 10:23:10 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 10:23:10 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Oct 10 10:23:10 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/71675879' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 10 10:23:10 compute-1 nova_compute[235132]: 2025-10-10 10:23:10.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:23:10 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:10 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:23:10 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:10.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:23:10 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 10:23:10 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 10:23:10 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 10:23:10 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 10:23:10 compute-1 podman[252224]: 2025-10-10 10:23:10.999415146 +0000 UTC m=+0.084717677 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Oct 10 10:23:11 compute-1 podman[252223]: 2025-10-10 10:23:11.004163756 +0000 UTC m=+0.105734582 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 10 10:23:11 compute-1 podman[252229]: 2025-10-10 10:23:11.028134681 +0000 UTC m=+0.105707901 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 10 10:23:11 compute-1 ceph-mon[79167]: from='client.26384 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:11 compute-1 ceph-mon[79167]: pgmap v1117: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:23:11 compute-1 ceph-mon[79167]: from='client.26926 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:11 compute-1 ceph-mon[79167]: from='client.17346 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:11 compute-1 ceph-mon[79167]: from='client.26399 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:11 compute-1 ceph-mon[79167]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 10:23:11 compute-1 ceph-mon[79167]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 10:23:11 compute-1 ceph-mon[79167]: from='client.17373 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:11 compute-1 ceph-mon[79167]: from='client.26956 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:11 compute-1 ceph-mon[79167]: from='client.26405 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:11 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/71675879' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 10 10:23:11 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1792969292' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 10 10:23:11 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1651511276' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 10 10:23:11 compute-1 ceph-mon[79167]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 10:23:11 compute-1 ceph-mon[79167]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 10:23:11 compute-1 ceph-mon[79167]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 10:23:11 compute-1 ceph-mon[79167]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 10:23:11 compute-1 ceph-mon[79167]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 10:23:11 compute-1 ceph-mon[79167]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 10:23:11 compute-1 ceph-mon[79167]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 10:23:11 compute-1 ceph-mon[79167]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 10:23:11 compute-1 ceph-mon[79167]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 10:23:11 compute-1 ceph-mon[79167]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 10:23:11 compute-1 ceph-mon[79167]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 10:23:11 compute-1 ceph-mon[79167]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 10:23:11 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Oct 10 10:23:11 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3592385461' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 10 10:23:11 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:11 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:23:11 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:11.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:23:12 compute-1 ceph-mon[79167]: from='client.17397 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:12 compute-1 ceph-mon[79167]: from='client.26989 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:12 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/447989855' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 10 10:23:12 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/366803169' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 10 10:23:12 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3592385461' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 10 10:23:12 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/3216819042' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 10 10:23:12 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/804085901' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 10 10:23:12 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1227914892' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 10 10:23:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Oct 10 10:23:12 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2970845557' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 10 10:23:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:23:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df"} v 0)
Oct 10 10:23:12 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4049190113' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 10 10:23:12 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:12 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:12 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:12.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:23:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:13 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:23:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:13 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:23:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:13 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:23:13 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Oct 10 10:23:13 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2793981116' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 10 10:23:13 compute-1 ceph-mon[79167]: pgmap v1118: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:23:13 compute-1 ceph-mon[79167]: from='client.17463 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:13 compute-1 ceph-mon[79167]: from='client.26462 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:13 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/2970845557' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 10 10:23:13 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1570390407' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 10 10:23:13 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3588161703' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 10 10:23:13 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/4049190113' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 10 10:23:13 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2769252938' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 10 10:23:13 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/2793981116' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 10 10:23:13 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:13 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:13 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:13.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:13 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs ls"} v 0)
Oct 10 10:23:13 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4223693866' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 10 10:23:13 compute-1 nova_compute[235132]: 2025-10-10 10:23:13.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:23:14 compute-1 ceph-mon[79167]: from='client.27064 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:14 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/668363965' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 10 10:23:14 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/638418532' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 10 10:23:14 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/4223693866' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 10 10:23:14 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/3098976176' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 10 10:23:14 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:14 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:14 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:14.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:14 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump"} v 0)
Oct 10 10:23:14 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1333227027' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 10 10:23:15 compute-1 ceph-mon[79167]: pgmap v1119: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:23:15 compute-1 ceph-mon[79167]: from='client.17517 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:15 compute-1 ceph-mon[79167]: from='client.26501 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:15 compute-1 ceph-mon[79167]: from='client.27103 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:15 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2482729441' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 10 10:23:15 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/1056008335' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 10 10:23:15 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1135114031' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 10 10:23:15 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2720008755' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 10 10:23:15 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/1333227027' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 10 10:23:15 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:15 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:15 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:15.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:15 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Oct 10 10:23:15 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/693351407' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 10 10:23:15 compute-1 nova_compute[235132]: 2025-10-10 10:23:15.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:23:16 compute-1 ceph-mon[79167]: from='client.27127 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:16 compute-1 ceph-mon[79167]: from='client.17541 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:16 compute-1 ceph-mon[79167]: from='client.26522 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:16 compute-1 ceph-mon[79167]: from='client.27139 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:16 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/856513782' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 10 10:23:16 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/693351407' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 10 10:23:16 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/3141273853' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 10 10:23:16 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2028344608' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 10 10:23:16 compute-1 ovs-appctl[253402]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 10 10:23:16 compute-1 ovs-appctl[253407]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 10 10:23:16 compute-1 ovs-appctl[253413]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 10 10:23:16 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd dump"} v 0)
Oct 10 10:23:16 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1989748693' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 10 10:23:16 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:16 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:16 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:16.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Oct 10 10:23:17 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1097788986' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 10 10:23:17 compute-1 ceph-mon[79167]: pgmap v1120: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:23:17 compute-1 ceph-mon[79167]: from='client.17565 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:17 compute-1 ceph-mon[79167]: from='client.26540 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:17 compute-1 ceph-mon[79167]: from='client.17577 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:17 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:23:17 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/4129244223' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 10 10:23:17 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/1989748693' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 10 10:23:17 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3917452810' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 10 10:23:17 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:17 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:17 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:17.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:23:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:23:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:23:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:23:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:23:18 compute-1 ceph-mon[79167]: from='client.27163 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:18 compute-1 ceph-mon[79167]: from='client.26546 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:18 compute-1 ceph-mon[79167]: from='client.17589 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:18 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/1097788986' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 10 10:23:18 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/4187206927' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 10 10:23:18 compute-1 ceph-mon[79167]: from='client.17607 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:18 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/3498280741' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 10 10:23:18 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1781781234' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 10 10:23:18 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Oct 10 10:23:18 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1339899947' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 10 10:23:18 compute-1 nova_compute[235132]: 2025-10-10 10:23:18.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:23:18 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd stat"} v 0)
Oct 10 10:23:18 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/929438571' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 10 10:23:18 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:18 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:18 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:18.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:19 compute-1 ceph-mon[79167]: from='client.26570 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:19 compute-1 ceph-mon[79167]: pgmap v1121: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:23:19 compute-1 ceph-mon[79167]: from='client.17622 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:19 compute-1 ceph-mon[79167]: from='client.26579 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:19 compute-1 ceph-mon[79167]: from='client.27196 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:19 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/1339899947' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 10 10:23:19 compute-1 ceph-mon[79167]: from='client.27208 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:19 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3595267682' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 10 10:23:19 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/929438571' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 10 10:23:19 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1213327011' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 10 10:23:19 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:19 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:19 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:19.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:19 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0)
Oct 10 10:23:19 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1795034779' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 10 10:23:20 compute-1 ceph-mon[79167]: from='client.27217 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:20 compute-1 ceph-mon[79167]: from='client.26597 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:20 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/436430043' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 10 10:23:20 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2054736604' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 10 10:23:20 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/1795034779' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 10 10:23:20 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/401893655' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 10 10:23:20 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0)
Oct 10 10:23:20 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3929414897' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 10 10:23:20 compute-1 nova_compute[235132]: 2025-10-10 10:23:20.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:23:20 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:20 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:20 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:20.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:20 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0)
Oct 10 10:23:20 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/151415482' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 10 10:23:21 compute-1 sudo[255004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:23:21 compute-1 sudo[255004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:23:21 compute-1 sudo[255004]: pam_unix(sudo:session): session closed for user root
Oct 10 10:23:21 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:21 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:21 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:21.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:21 compute-1 ceph-mon[79167]: from='client.17661 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:21 compute-1 ceph-mon[79167]: from='client.26603 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:23:21 compute-1 ceph-mon[79167]: pgmap v1122: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:23:21 compute-1 ceph-mon[79167]: from='client.27250 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:21 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3929414897' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 10 10:23:21 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1781797121' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 10 10:23:21 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/3032790570' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 10 10:23:21 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3539018218' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 10 10:23:21 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/151415482' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 10 10:23:21 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/3519844817' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 10 10:23:21 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0)
Oct 10 10:23:21 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2054782978' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 10 10:23:22 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0)
Oct 10 10:23:22 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/833543587' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 10 10:23:22 compute-1 ceph-mon[79167]: from='client.26633 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:22 compute-1 ceph-mon[79167]: from='client.17697 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:22 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2014367048' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 10 10:23:22 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/2054782978' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 10 10:23:22 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/928458827' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 10 10:23:22 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1759395512' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 10 10:23:22 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/833543587' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 10 10:23:22 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3180821717' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 10 10:23:22 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:23:22 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0)
Oct 10 10:23:22 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2593878606' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 10 10:23:22 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:22 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 10:23:22 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:22.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 10:23:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:23:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:23:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:23:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:23:23 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0)
Oct 10 10:23:23 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2474021729' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 10 10:23:23 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:23 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:23 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:23.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:23 compute-1 ceph-mon[79167]: pgmap v1123: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:23:23 compute-1 ceph-mon[79167]: from='client.27298 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:23 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/2593878606' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 10 10:23:23 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/314814720' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 10 10:23:23 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/316001559' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 10 10:23:23 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/2474021729' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 10 10:23:23 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1610891490' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 10 10:23:23 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/579627303' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 10 10:23:23 compute-1 nova_compute[235132]: 2025-10-10 10:23:23.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:23:23 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0)
Oct 10 10:23:23 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1639796689' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 10 10:23:24 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0)
Oct 10 10:23:24 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3281277725' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 10 10:23:24 compute-1 ceph-mon[79167]: from='client.26666 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:24 compute-1 ceph-mon[79167]: from='client.17736 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:24 compute-1 ceph-mon[79167]: from='client.27325 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:24 compute-1 ceph-mon[79167]: pgmap v1124: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:23:24 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1812088201' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct 10 10:23:24 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/333192748' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 10 10:23:24 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/1639796689' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 10 10:23:24 compute-1 ceph-mon[79167]: from='client.27340 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:24 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3281277725' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 10 10:23:24 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2129940614' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 10 10:23:24 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:24 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:24 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:24.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:25 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0)
Oct 10 10:23:25 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3007322319' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct 10 10:23:25 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:25 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:23:25 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:25.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:23:25 compute-1 ceph-mon[79167]: from='client.27349 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:25 compute-1 ceph-mon[79167]: from='client.17766 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:25 compute-1 ceph-mon[79167]: from='client.26687 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:25 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1944349469' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct 10 10:23:25 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2018417856' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct 10 10:23:25 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3007322319' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct 10 10:23:25 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/3153766169' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct 10 10:23:25 compute-1 nova_compute[235132]: 2025-10-10 10:23:25.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:23:26 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0)
Oct 10 10:23:26 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2707461924' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct 10 10:23:26 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 10 10:23:26 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1702982701' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:23:26 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 10 10:23:26 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1702982701' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:23:26 compute-1 ceph-mon[79167]: from='client.17793 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:26 compute-1 ceph-mon[79167]: pgmap v1125: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:23:26 compute-1 ceph-mon[79167]: from='client.26699 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:26 compute-1 ceph-mon[79167]: from='client.27382 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:26 compute-1 ceph-mon[79167]: from='client.17805 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:26 compute-1 ceph-mon[79167]: from='client.27391 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:26 compute-1 ceph-mon[79167]: from='client.26705 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:26 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3316168477' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct 10 10:23:26 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/3944079894' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 10 10:23:26 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/2707461924' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct 10 10:23:26 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/1702982701' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:23:26 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/1702982701' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:23:26 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0)
Oct 10 10:23:26 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4075409164' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct 10 10:23:26 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:26 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:26 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:26.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:27 compute-1 podman[255470]: 2025-10-10 10:23:27.098199612 +0000 UTC m=+0.091492642 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 10 10:23:27 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:27 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:27 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:27.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:27 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:23:27 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/463257619' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct 10 10:23:27 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/3285063725' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct 10 10:23:27 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/4075409164' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct 10 10:23:27 compute-1 ceph-mon[79167]: from='client.17856 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:27 compute-1 ceph-mon[79167]: from='client.27433 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:27 compute-1 ceph-mon[79167]: from='client.26738 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:27 compute-1 ceph-mon[79167]: from='client.17862 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:27 compute-1 virtqemud[234629]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 10 10:23:27 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0)
Oct 10 10:23:27 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3733082418' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 10 10:23:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:23:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:23:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:23:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:23:28 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0)
Oct 10 10:23:28 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1570035963' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct 10 10:23:28 compute-1 nova_compute[235132]: 2025-10-10 10:23:28.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:23:28 compute-1 ceph-mon[79167]: from='client.27439 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:28 compute-1 ceph-mon[79167]: from='client.26744 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:28 compute-1 ceph-mon[79167]: pgmap v1126: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:23:28 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2394245946' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 10 10:23:28 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3733082418' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 10 10:23:28 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/742710764' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 10 10:23:28 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2702895973' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct 10 10:23:28 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/1570035963' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct 10 10:23:28 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2080136583' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct 10 10:23:28 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:28 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:28 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:28.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:29 compute-1 systemd[1]: Starting Time & Date Service...
Oct 10 10:23:29 compute-1 systemd[1]: Started Time & Date Service.
Oct 10 10:23:29 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:29 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:23:29 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:29.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:23:29 compute-1 ceph-mon[79167]: from='client.17901 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:29 compute-1 ceph-mon[79167]: from='client.27469 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:29 compute-1 ceph-mon[79167]: from='client.17913 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:29 compute-1 ceph-mon[79167]: from='client.26768 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:23:29 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1354894680' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 10 10:23:29 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Oct 10 10:23:29 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2650376232' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 10 10:23:30 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0)
Oct 10 10:23:30 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/344713102' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct 10 10:23:30 compute-1 ceph-mon[79167]: pgmap v1127: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:23:30 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/2650376232' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 10 10:23:30 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1070543984' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct 10 10:23:30 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/344713102' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct 10 10:23:30 compute-1 nova_compute[235132]: 2025-10-10 10:23:30.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:23:30 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:30 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:23:30 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:30.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:23:31 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:31 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:31 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:31.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:31 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:23:32 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:23:32 compute-1 ceph-mon[79167]: pgmap v1128: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:23:32 compute-1 sudo[256071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:23:32 compute-1 sudo[256071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:23:32 compute-1 sudo[256071]: pam_unix(sudo:session): session closed for user root
Oct 10 10:23:32 compute-1 sudo[256096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:23:32 compute-1 sudo[256096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:23:32 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:32 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:32 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:32.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:23:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:23:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:23:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:23:33 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:33 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:33 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:33.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:33 compute-1 sudo[256096]: pam_unix(sudo:session): session closed for user root
Oct 10 10:23:33 compute-1 nova_compute[235132]: 2025-10-10 10:23:33.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:23:33 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:23:33 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:23:33 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:23:33 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:23:33 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:23:33 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:23:33 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:23:34 compute-1 ceph-mon[79167]: pgmap v1129: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:23:34 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:34 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:34 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:34.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:35 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:35 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:23:35 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:35.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:23:35 compute-1 nova_compute[235132]: 2025-10-10 10:23:35.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:23:36 compute-1 ceph-mon[79167]: pgmap v1130: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:23:36 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:36 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:36 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:36.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:37 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:37 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:37 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:37.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:23:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:23:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:23:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:23:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:23:38 compute-1 sudo[256155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:23:38 compute-1 sudo[256155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:23:38 compute-1 sudo[256155]: pam_unix(sudo:session): session closed for user root
Oct 10 10:23:38 compute-1 nova_compute[235132]: 2025-10-10 10:23:38.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:23:38 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:38 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:23:38 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:38.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:23:39 compute-1 ceph-mon[79167]: pgmap v1131: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:23:39 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:23:39 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:23:39 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:39 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:39 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:39.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:40 compute-1 nova_compute[235132]: 2025-10-10 10:23:40.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:23:40 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:40 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:40 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:40.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:41 compute-1 ceph-mon[79167]: pgmap v1132: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:23:41 compute-1 sudo[256181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:23:41 compute-1 sudo[256181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:23:41 compute-1 sudo[256181]: pam_unix(sudo:session): session closed for user root
Oct 10 10:23:41 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:41 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:41 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:41.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:41 compute-1 podman[256205]: 2025-10-10 10:23:41.463095863 +0000 UTC m=+0.067024423 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid)
Oct 10 10:23:41 compute-1 podman[256206]: 2025-10-10 10:23:41.481429394 +0000 UTC m=+0.079605727 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Oct 10 10:23:41 compute-1 podman[256207]: 2025-10-10 10:23:41.51750712 +0000 UTC m=+0.107002826 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Oct 10 10:23:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:23:42.223 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:23:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:23:42.224 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:23:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:23:42.224 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:23:42 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:23:42 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:42 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:23:42 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:42.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:23:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:23:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:23:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:23:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:43 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:23:43 compute-1 ceph-mon[79167]: pgmap v1133: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:23:43 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:43 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:43 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:43.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:43 compute-1 nova_compute[235132]: 2025-10-10 10:23:43.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:23:44 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:44 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:23:44 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:44.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:23:45 compute-1 ceph-mon[79167]: pgmap v1134: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:23:45 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:45 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:23:45 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:45.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:23:45 compute-1 nova_compute[235132]: 2025-10-10 10:23:45.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:23:46 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:23:46 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:46 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:46 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:46.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:47 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:47 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:47 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:47.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:47 compute-1 ceph-mon[79167]: pgmap v1135: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:23:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:23:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:23:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:23:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:23:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:23:48 compute-1 nova_compute[235132]: 2025-10-10 10:23:48.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:23:48 compute-1 ceph-mon[79167]: pgmap v1136: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:23:48 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:48 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:23:48 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:48.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:23:49 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:49 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:49 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:49.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:50 compute-1 ceph-mon[79167]: pgmap v1137: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:23:50 compute-1 nova_compute[235132]: 2025-10-10 10:23:50.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:23:50 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:50 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:23:50 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:50.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:23:51 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:51 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:51 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:51.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:23:52 compute-1 ceph-mon[79167]: pgmap v1138: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:23:52 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:52 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:52 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:52.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:23:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:23:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:23:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:23:53 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:53 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:23:53 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:53.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:23:53 compute-1 nova_compute[235132]: 2025-10-10 10:23:53.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:23:54 compute-1 ceph-mon[79167]: pgmap v1139: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:23:54 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:54 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:54 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:54.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:55 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:55 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:23:55 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:55.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:23:55 compute-1 nova_compute[235132]: 2025-10-10 10:23:55.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:23:56 compute-1 ceph-mon[79167]: pgmap v1140: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:23:56 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:56 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:56 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:56.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:57 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:57 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:57 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:57.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:23:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:23:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:23:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:23:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:23:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:23:58 compute-1 podman[256276]: 2025-10-10 10:23:58.008245693 +0000 UTC m=+0.083606357 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct 10 10:23:58 compute-1 nova_compute[235132]: 2025-10-10 10:23:58.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:23:58 compute-1 ceph-mon[79167]: pgmap v1141: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:23:58 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:58 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:23:58 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:23:58.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:23:59 compute-1 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 10 10:23:59 compute-1 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 10 10:23:59 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:23:59 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:23:59 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:23:59.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:24:00 compute-1 nova_compute[235132]: 2025-10-10 10:24:00.334 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:24:00 compute-1 nova_compute[235132]: 2025-10-10 10:24:00.334 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:24:00 compute-1 nova_compute[235132]: 2025-10-10 10:24:00.335 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 10:24:00 compute-1 nova_compute[235132]: 2025-10-10 10:24:00.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:24:00 compute-1 ceph-mon[79167]: pgmap v1142: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:24:00 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:00 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:00 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:00.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:01 compute-1 nova_compute[235132]: 2025-10-10 10:24:01.040 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:24:01 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:01 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:01 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:01.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:01 compute-1 sudo[256302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:24:01 compute-1 sudo[256302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:24:01 compute-1 sudo[256302]: pam_unix(sudo:session): session closed for user root
Oct 10 10:24:01 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:24:01 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/740909772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:24:02 compute-1 nova_compute[235132]: 2025-10-10 10:24:02.040 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:24:02 compute-1 nova_compute[235132]: 2025-10-10 10:24:02.059 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:24:02 compute-1 nova_compute[235132]: 2025-10-10 10:24:02.059 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:24:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:24:02 compute-1 ceph-mon[79167]: pgmap v1143: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:24:02 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/3076253319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:24:02 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:02 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:02 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:02.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:24:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:24:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:24:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:24:03 compute-1 nova_compute[235132]: 2025-10-10 10:24:03.045 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:24:03 compute-1 nova_compute[235132]: 2025-10-10 10:24:03.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 10:24:03 compute-1 nova_compute[235132]: 2025-10-10 10:24:03.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 10:24:03 compute-1 nova_compute[235132]: 2025-10-10 10:24:03.067 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 10 10:24:03 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:03 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.002000054s ======
Oct 10 10:24:03 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:03.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Oct 10 10:24:03 compute-1 nova_compute[235132]: 2025-10-10 10:24:03.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:24:04 compute-1 nova_compute[235132]: 2025-10-10 10:24:04.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:24:04 compute-1 ceph-mon[79167]: pgmap v1144: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:24:04 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1898300847' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:24:04 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:04 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:24:04 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:04.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:24:05 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:05 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:24:05 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:05.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:24:05 compute-1 nova_compute[235132]: 2025-10-10 10:24:05.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:24:05 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2544635714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:24:06 compute-1 nova_compute[235132]: 2025-10-10 10:24:06.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:24:06 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:06 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:06 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:06.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:06 compute-1 ceph-mon[79167]: pgmap v1145: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:24:07 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:07 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:07 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:07.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:24:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:24:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:24:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:24:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:24:08 compute-1 nova_compute[235132]: 2025-10-10 10:24:08.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:24:08 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:08 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:24:08 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:08.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:24:08 compute-1 ceph-mon[79167]: pgmap v1146: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:24:09 compute-1 nova_compute[235132]: 2025-10-10 10:24:09.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:24:09 compute-1 nova_compute[235132]: 2025-10-10 10:24:09.070 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:24:09 compute-1 nova_compute[235132]: 2025-10-10 10:24:09.070 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:24:09 compute-1 nova_compute[235132]: 2025-10-10 10:24:09.071 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:24:09 compute-1 nova_compute[235132]: 2025-10-10 10:24:09.071 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:24:09 compute-1 nova_compute[235132]: 2025-10-10 10:24:09.071 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:24:09 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:09 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:09 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:09.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:09 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:24:09 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1433281411' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:24:09 compute-1 nova_compute[235132]: 2025-10-10 10:24:09.574 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:24:09 compute-1 nova_compute[235132]: 2025-10-10 10:24:09.773 2 WARNING nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:24:09 compute-1 nova_compute[235132]: 2025-10-10 10:24:09.781 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4702MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:24:09 compute-1 nova_compute[235132]: 2025-10-10 10:24:09.781 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:24:09 compute-1 nova_compute[235132]: 2025-10-10 10:24:09.782 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:24:09 compute-1 nova_compute[235132]: 2025-10-10 10:24:09.864 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:24:09 compute-1 nova_compute[235132]: 2025-10-10 10:24:09.865 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:24:09 compute-1 nova_compute[235132]: 2025-10-10 10:24:09.881 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:24:09 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/1433281411' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:24:10 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:24:10 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1672674889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:24:10 compute-1 nova_compute[235132]: 2025-10-10 10:24:10.343 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:24:10 compute-1 nova_compute[235132]: 2025-10-10 10:24:10.350 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:24:10 compute-1 nova_compute[235132]: 2025-10-10 10:24:10.369 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:24:10 compute-1 nova_compute[235132]: 2025-10-10 10:24:10.371 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:24:10 compute-1 nova_compute[235132]: 2025-10-10 10:24:10.371 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:24:10 compute-1 nova_compute[235132]: 2025-10-10 10:24:10.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:24:10 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:10 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:24:10 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:10.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:24:11 compute-1 ceph-mon[79167]: pgmap v1147: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:24:11 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/1672674889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:24:11 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:11 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:24:11 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:11.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:24:11 compute-1 podman[256376]: 2025-10-10 10:24:11.571783517 +0000 UTC m=+0.072246725 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 10 10:24:11 compute-1 podman[256377]: 2025-10-10 10:24:11.581431781 +0000 UTC m=+0.071167627 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 10 10:24:11 compute-1 podman[256398]: 2025-10-10 10:24:11.65342983 +0000 UTC m=+0.105497085 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 10 10:24:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:24:12 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:12 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:12 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:12.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:24:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:24:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:24:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:13 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:24:13 compute-1 ceph-mon[79167]: pgmap v1148: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:24:13 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:13 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:13 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:13.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:13 compute-1 nova_compute[235132]: 2025-10-10 10:24:13.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:24:14 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:14 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:14 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:14.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:15 compute-1 ceph-mon[79167]: pgmap v1149: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:24:15 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:15 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:24:15 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:15.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:24:15 compute-1 nova_compute[235132]: 2025-10-10 10:24:15.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:24:16 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:16 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:24:16 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:16.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:24:17 compute-1 ceph-mon[79167]: pgmap v1150: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:24:17 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:24:17 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:17 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:17 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:17.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:24:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:24:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:24:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:24:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:24:18 compute-1 nova_compute[235132]: 2025-10-10 10:24:18.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:24:18 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:18 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:18 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:18.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:19 compute-1 ceph-mon[79167]: pgmap v1151: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:24:19 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:19 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:24:19 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:19.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:24:19 compute-1 sudo[249112]: pam_unix(sudo:session): session closed for user root
Oct 10 10:24:19 compute-1 sshd-session[249111]: Received disconnect from 192.168.122.10 port 54974:11: disconnected by user
Oct 10 10:24:19 compute-1 sshd-session[249111]: Disconnected from user zuul 192.168.122.10 port 54974
Oct 10 10:24:19 compute-1 sshd-session[249108]: pam_unix(sshd:session): session closed for user zuul
Oct 10 10:24:19 compute-1 systemd-logind[789]: Session 57 logged out. Waiting for processes to exit.
Oct 10 10:24:19 compute-1 systemd[1]: session-57.scope: Deactivated successfully.
Oct 10 10:24:19 compute-1 systemd[1]: session-57.scope: Consumed 2min 52.246s CPU time, 650.3M memory peak, read 184.1M from disk, written 128.2M to disk.
Oct 10 10:24:19 compute-1 systemd-logind[789]: Removed session 57.
Oct 10 10:24:19 compute-1 sshd-session[256443]: Accepted publickey for zuul from 192.168.122.10 port 47940 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 10:24:19 compute-1 systemd-logind[789]: New session 58 of user zuul.
Oct 10 10:24:19 compute-1 systemd[1]: Started Session 58 of User zuul.
Oct 10 10:24:19 compute-1 sshd-session[256443]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 10:24:19 compute-1 sudo[256447]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-1-2025-10-10-exqxjyx.tar.xz
Oct 10 10:24:19 compute-1 sudo[256447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:24:20 compute-1 sudo[256447]: pam_unix(sudo:session): session closed for user root
Oct 10 10:24:20 compute-1 sshd-session[256446]: Received disconnect from 192.168.122.10 port 47940:11: disconnected by user
Oct 10 10:24:20 compute-1 sshd-session[256446]: Disconnected from user zuul 192.168.122.10 port 47940
Oct 10 10:24:20 compute-1 sshd-session[256443]: pam_unix(sshd:session): session closed for user zuul
Oct 10 10:24:20 compute-1 systemd[1]: session-58.scope: Deactivated successfully.
Oct 10 10:24:20 compute-1 systemd-logind[789]: Session 58 logged out. Waiting for processes to exit.
Oct 10 10:24:20 compute-1 systemd-logind[789]: Removed session 58.
Oct 10 10:24:20 compute-1 sshd-session[256472]: Accepted publickey for zuul from 192.168.122.10 port 47942 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 10:24:20 compute-1 systemd-logind[789]: New session 59 of user zuul.
Oct 10 10:24:20 compute-1 systemd[1]: Started Session 59 of User zuul.
Oct 10 10:24:20 compute-1 sshd-session[256472]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 10:24:20 compute-1 sudo[256476]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Oct 10 10:24:20 compute-1 sudo[256476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:24:20 compute-1 sudo[256476]: pam_unix(sudo:session): session closed for user root
Oct 10 10:24:20 compute-1 sshd-session[256475]: Received disconnect from 192.168.122.10 port 47942:11: disconnected by user
Oct 10 10:24:20 compute-1 sshd-session[256475]: Disconnected from user zuul 192.168.122.10 port 47942
Oct 10 10:24:20 compute-1 sshd-session[256472]: pam_unix(sshd:session): session closed for user zuul
Oct 10 10:24:20 compute-1 systemd[1]: session-59.scope: Deactivated successfully.
Oct 10 10:24:20 compute-1 systemd-logind[789]: Session 59 logged out. Waiting for processes to exit.
Oct 10 10:24:20 compute-1 systemd-logind[789]: Removed session 59.
Oct 10 10:24:20 compute-1 nova_compute[235132]: 2025-10-10 10:24:20.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:24:20 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:20 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:20 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:20.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:21 compute-1 ceph-mon[79167]: pgmap v1152: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:24:21 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:21 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:24:21 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:21.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:24:21 compute-1 sudo[256502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:24:21 compute-1 sudo[256502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:24:21 compute-1 sudo[256502]: pam_unix(sudo:session): session closed for user root
Oct 10 10:24:22 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:24:22 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:22 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:22 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:22.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:24:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:24:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:24:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:24:23 compute-1 ceph-mon[79167]: pgmap v1153: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:24:23 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:23 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:23 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:23.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:23 compute-1 nova_compute[235132]: 2025-10-10 10:24:23.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:24:24 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:24 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:24 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:24.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:25 compute-1 ceph-mon[79167]: pgmap v1154: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:24:25 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:25 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:25 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:25.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:25 compute-1 nova_compute[235132]: 2025-10-10 10:24:25.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:24:26 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:26 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:24:26 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:26.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:24:27 compute-1 ceph-mon[79167]: pgmap v1155: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:24:27 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/2262894800' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:24:27 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/2262894800' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:24:27 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:27 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:27 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:27.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:27 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:24:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:24:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:24:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:24:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:24:28 compute-1 nova_compute[235132]: 2025-10-10 10:24:28.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:24:28 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:28 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:28 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:28.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:28 compute-1 podman[256530]: 2025-10-10 10:24:28.962510593 +0000 UTC m=+0.063785894 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:24:29 compute-1 ceph-mon[79167]: pgmap v1156: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:24:29 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:29 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:29 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:29.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:30 compute-1 nova_compute[235132]: 2025-10-10 10:24:30.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:24:30 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:30 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:24:30 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:30.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:24:31 compute-1 ceph-mon[79167]: pgmap v1157: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:24:31 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:24:31 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:31 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:31 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:31.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:32 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:24:32 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:32 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 10:24:32 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:32.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 10:24:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:24:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:24:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:24:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:24:33 compute-1 ceph-mon[79167]: pgmap v1158: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:24:33 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:33 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:24:33 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:33.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:24:33 compute-1 nova_compute[235132]: 2025-10-10 10:24:33.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:24:34 compute-1 ceph-mon[79167]: pgmap v1159: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:24:34 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:34 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:34 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:34.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:35 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:35 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:35 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:35.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:35 compute-1 nova_compute[235132]: 2025-10-10 10:24:35.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:24:36 compute-1 ceph-mon[79167]: pgmap v1160: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:24:36 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:36 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:36 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:36.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:37 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:37 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:37 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:37.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:24:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:24:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:24:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:24:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:24:38 compute-1 ceph-mon[79167]: pgmap v1161: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:24:38 compute-1 nova_compute[235132]: 2025-10-10 10:24:38.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:24:38 compute-1 sudo[256554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:24:38 compute-1 sudo[256554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:24:38 compute-1 sudo[256554]: pam_unix(sudo:session): session closed for user root
Oct 10 10:24:38 compute-1 sudo[256579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Oct 10 10:24:38 compute-1 sudo[256579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:24:38 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:38 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:24:38 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:38.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:24:39 compute-1 sudo[256579]: pam_unix(sudo:session): session closed for user root
Oct 10 10:24:39 compute-1 sudo[256625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:24:39 compute-1 sudo[256625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:24:39 compute-1 sudo[256625]: pam_unix(sudo:session): session closed for user root
Oct 10 10:24:39 compute-1 sudo[256650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:24:39 compute-1 sudo[256650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:24:39 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:39 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:39 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:39.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:39 compute-1 sudo[256650]: pam_unix(sudo:session): session closed for user root
Oct 10 10:24:40 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:24:40 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:24:40 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:24:40 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:24:40 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:24:40 compute-1 nova_compute[235132]: 2025-10-10 10:24:40.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:24:40 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:40 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:40 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:40.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:41 compute-1 ceph-mon[79167]: pgmap v1162: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:24:41 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:24:41 compute-1 ceph-mon[79167]: pgmap v1163: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 10 10:24:41 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:24:41 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:24:41 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:24:41 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:24:41 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:24:41 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:41 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:41 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:41.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:41 compute-1 sudo[256708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:24:41 compute-1 sudo[256708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:24:41 compute-1 sudo[256708]: pam_unix(sudo:session): session closed for user root
Oct 10 10:24:41 compute-1 podman[256733]: 2025-10-10 10:24:41.785617357 +0000 UTC m=+0.065176582 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct 10 10:24:41 compute-1 podman[256732]: 2025-10-10 10:24:41.814314712 +0000 UTC m=+0.092499260 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:24:41 compute-1 podman[256734]: 2025-10-10 10:24:41.814287491 +0000 UTC m=+0.091461402 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:24:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:24:42.223 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:24:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:24:42.224 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:24:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:24:42.224 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:24:42 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:24:42 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:42 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:42 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:42.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:24:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:24:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:24:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:43 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:24:43 compute-1 ceph-mon[79167]: pgmap v1164: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 10 10:24:43 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:43 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:43 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:43.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:43 compute-1 nova_compute[235132]: 2025-10-10 10:24:43.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:24:44 compute-1 sudo[256799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:24:44 compute-1 sudo[256799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:24:44 compute-1 sudo[256799]: pam_unix(sudo:session): session closed for user root
Oct 10 10:24:44 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:44 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:44 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:44.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:45 compute-1 ceph-mon[79167]: pgmap v1165: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 10 10:24:45 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:24:45 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:24:45 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:45 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:45 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:45.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:45 compute-1 nova_compute[235132]: 2025-10-10 10:24:45.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:24:46 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:46 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:24:46 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:46.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:24:47 compute-1 ceph-mon[79167]: pgmap v1166: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 10 10:24:47 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:24:47 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:47 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:47 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:47.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:24:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:24:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:24:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:24:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:24:48 compute-1 nova_compute[235132]: 2025-10-10 10:24:48.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:24:48 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:48 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:24:48 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:48.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:24:49 compute-1 ceph-mon[79167]: pgmap v1167: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 10 10:24:49 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:49 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:49 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:49.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:50 compute-1 nova_compute[235132]: 2025-10-10 10:24:50.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:24:50 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:50 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:50 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:50.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:51 compute-1 ceph-mon[79167]: pgmap v1168: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 10 10:24:51 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:51 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:24:51 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:51.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:24:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:24:52 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:52 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:52 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:52.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:24:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:24:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:24:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:24:53 compute-1 ceph-mon[79167]: pgmap v1169: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:24:53 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:53 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:53 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:53.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:53 compute-1 nova_compute[235132]: 2025-10-10 10:24:53.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:24:54 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #67. Immutable memtables: 0.
Oct 10 10:24:54 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:24:54.201685) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 10:24:54 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 67
Oct 10 10:24:54 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091894201791, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 2424, "num_deletes": 508, "total_data_size": 5032071, "memory_usage": 5106448, "flush_reason": "Manual Compaction"}
Oct 10 10:24:54 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #68: started
Oct 10 10:24:54 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091894220148, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 68, "file_size": 3255631, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34335, "largest_seqno": 36754, "table_properties": {"data_size": 3245062, "index_size": 5975, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3525, "raw_key_size": 29579, "raw_average_key_size": 21, "raw_value_size": 3220417, "raw_average_value_size": 2305, "num_data_blocks": 256, "num_entries": 1397, "num_filter_entries": 1397, "num_deletions": 508, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760091750, "oldest_key_time": 1760091750, "file_creation_time": 1760091894, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:24:54 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 18537 microseconds, and 8825 cpu microseconds.
Oct 10 10:24:54 compute-1 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:24:54 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:24:54.220228) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #68: 3255631 bytes OK
Oct 10 10:24:54 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:24:54.220268) [db/memtable_list.cc:519] [default] Level-0 commit table #68 started
Oct 10 10:24:54 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:24:54.222314) [db/memtable_list.cc:722] [default] Level-0 commit table #68: memtable #1 done
Oct 10 10:24:54 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:24:54.222375) EVENT_LOG_v1 {"time_micros": 1760091894222366, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 10:24:54 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:24:54.222401) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 10:24:54 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 5019287, prev total WAL file size 5019287, number of live WAL files 2.
Oct 10 10:24:54 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000064.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:24:54 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:24:54.224474) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Oct 10 10:24:54 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 10:24:54 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [68(3179KB)], [66(13MB)]
Oct 10 10:24:54 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091894224574, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [68], "files_L6": [66], "score": -1, "input_data_size": 17070640, "oldest_snapshot_seqno": -1}
Oct 10 10:24:54 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #69: 6498 keys, 14859050 bytes, temperature: kUnknown
Oct 10 10:24:54 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091894310842, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 69, "file_size": 14859050, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14815102, "index_size": 26622, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16261, "raw_key_size": 170822, "raw_average_key_size": 26, "raw_value_size": 14697358, "raw_average_value_size": 2261, "num_data_blocks": 1053, "num_entries": 6498, "num_filter_entries": 6498, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089522, "oldest_key_time": 0, "file_creation_time": 1760091894, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 69, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:24:54 compute-1 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:24:54 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:24:54.311126) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 14859050 bytes
Oct 10 10:24:54 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:24:54.312552) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 197.7 rd, 172.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 13.2 +0.0 blob) out(14.2 +0.0 blob), read-write-amplify(9.8) write-amplify(4.6) OK, records in: 7531, records dropped: 1033 output_compression: NoCompression
Oct 10 10:24:54 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:24:54.312570) EVENT_LOG_v1 {"time_micros": 1760091894312561, "job": 40, "event": "compaction_finished", "compaction_time_micros": 86360, "compaction_time_cpu_micros": 63715, "output_level": 6, "num_output_files": 1, "total_output_size": 14859050, "num_input_records": 7531, "num_output_records": 6498, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 10:24:54 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:24:54 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091894313433, "job": 40, "event": "table_file_deletion", "file_number": 68}
Oct 10 10:24:54 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000066.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:24:54 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091894316563, "job": 40, "event": "table_file_deletion", "file_number": 66}
Oct 10 10:24:54 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:24:54.224263) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:24:54 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:24:54.316716) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:24:54 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:24:54.316725) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:24:54 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:24:54.316728) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:24:54 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:24:54.316730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:24:54 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:24:54.316733) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:24:54 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:54 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:24:54 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:54.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:24:55 compute-1 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 10:24:55 compute-1 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 13K writes, 48K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 13K writes, 4030 syncs, 3.37 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2312 writes, 7308 keys, 2312 commit groups, 1.0 writes per commit group, ingest: 7.72 MB, 0.01 MB/s
                                           Interval WAL: 2312 writes, 996 syncs, 2.32 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 10 10:24:55 compute-1 ceph-mon[79167]: pgmap v1170: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:24:55 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:55 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:55 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:55.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:55 compute-1 nova_compute[235132]: 2025-10-10 10:24:55.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:24:56 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:56 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:24:56 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:56.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:24:57 compute-1 ceph-mon[79167]: pgmap v1171: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:24:57 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:57 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:57 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:57.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:24:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:24:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:24:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:24:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:24:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:24:58 compute-1 nova_compute[235132]: 2025-10-10 10:24:58.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:24:58 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:58 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:24:58 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:24:58.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:24:59 compute-1 ceph-mon[79167]: pgmap v1172: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:24:59 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:24:59 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:24:59 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:24:59.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:24:59 compute-1 podman[256832]: 2025-10-10 10:24:59.99499259 +0000 UTC m=+0.090199157 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 10 10:25:00 compute-1 nova_compute[235132]: 2025-10-10 10:25:00.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:25:00 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:00 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:25:00 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:00.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:25:01 compute-1 ceph-mon[79167]: pgmap v1173: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:25:01 compute-1 nova_compute[235132]: 2025-10-10 10:25:01.371 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:25:01 compute-1 nova_compute[235132]: 2025-10-10 10:25:01.372 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:25:01 compute-1 nova_compute[235132]: 2025-10-10 10:25:01.372 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:25:01 compute-1 nova_compute[235132]: 2025-10-10 10:25:01.372 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 10:25:01 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:01 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:25:01 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:01.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:25:01 compute-1 sudo[256854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:25:01 compute-1 sudo[256854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:25:01 compute-1 sudo[256854]: pam_unix(sudo:session): session closed for user root
Oct 10 10:25:02 compute-1 nova_compute[235132]: 2025-10-10 10:25:02.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:25:02 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/3269636147' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:25:02 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:25:02 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1111097345' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:25:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:25:02 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:02 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:25:02 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:02.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:25:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:25:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:25:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:25:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:25:03 compute-1 nova_compute[235132]: 2025-10-10 10:25:03.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:25:03 compute-1 ceph-mon[79167]: pgmap v1174: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:25:03 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:03 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:03 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:03.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:03 compute-1 nova_compute[235132]: 2025-10-10 10:25:03.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:25:04 compute-1 nova_compute[235132]: 2025-10-10 10:25:04.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:25:04 compute-1 nova_compute[235132]: 2025-10-10 10:25:04.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 10:25:04 compute-1 nova_compute[235132]: 2025-10-10 10:25:04.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 10:25:04 compute-1 nova_compute[235132]: 2025-10-10 10:25:04.067 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 10 10:25:04 compute-1 nova_compute[235132]: 2025-10-10 10:25:04.067 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:25:04 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:04 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:25:04 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:04.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:25:05 compute-1 ceph-mon[79167]: pgmap v1175: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:25:05 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:05 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:05 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:05.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:05 compute-1 nova_compute[235132]: 2025-10-10 10:25:05.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:25:06 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1184273382' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:25:06 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:06 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:25:06 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:06.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:25:07 compute-1 ceph-mon[79167]: pgmap v1176: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:25:07 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3882845651' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:25:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:25:07 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:07 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:07 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:07.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:25:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:25:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:25:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:25:08 compute-1 nova_compute[235132]: 2025-10-10 10:25:08.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:25:08 compute-1 nova_compute[235132]: 2025-10-10 10:25:08.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:25:08 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:09 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:25:09 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:08.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:25:09 compute-1 ceph-mon[79167]: pgmap v1177: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:25:09 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:09 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:09 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:09.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:10 compute-1 nova_compute[235132]: 2025-10-10 10:25:10.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:25:11 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:11 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:11 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:11.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:11 compute-1 nova_compute[235132]: 2025-10-10 10:25:11.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:25:11 compute-1 nova_compute[235132]: 2025-10-10 10:25:11.076 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:25:11 compute-1 nova_compute[235132]: 2025-10-10 10:25:11.076 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:25:11 compute-1 nova_compute[235132]: 2025-10-10 10:25:11.076 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:25:11 compute-1 nova_compute[235132]: 2025-10-10 10:25:11.077 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:25:11 compute-1 nova_compute[235132]: 2025-10-10 10:25:11.077 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:25:11 compute-1 ceph-mon[79167]: pgmap v1178: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:25:11 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:25:11 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3659408900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:25:11 compute-1 nova_compute[235132]: 2025-10-10 10:25:11.540 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:25:11 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:11 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:11 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:11.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:11 compute-1 nova_compute[235132]: 2025-10-10 10:25:11.740 2 WARNING nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:25:11 compute-1 nova_compute[235132]: 2025-10-10 10:25:11.742 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4826MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:25:11 compute-1 nova_compute[235132]: 2025-10-10 10:25:11.742 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:25:11 compute-1 nova_compute[235132]: 2025-10-10 10:25:11.742 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:25:11 compute-1 nova_compute[235132]: 2025-10-10 10:25:11.845 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:25:11 compute-1 nova_compute[235132]: 2025-10-10 10:25:11.846 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:25:11 compute-1 nova_compute[235132]: 2025-10-10 10:25:11.909 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:25:11 compute-1 podman[256906]: 2025-10-10 10:25:11.972756686 +0000 UTC m=+0.075923057 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, container_name=iscsid)
Oct 10 10:25:11 compute-1 podman[256907]: 2025-10-10 10:25:11.996489354 +0000 UTC m=+0.085205620 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 10 10:25:12 compute-1 podman[256908]: 2025-10-10 10:25:12.03983381 +0000 UTC m=+0.132382871 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Oct 10 10:25:12 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3659408900' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:25:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:25:12 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1732526091' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:25:12 compute-1 nova_compute[235132]: 2025-10-10 10:25:12.400 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:25:12 compute-1 nova_compute[235132]: 2025-10-10 10:25:12.409 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:25:12 compute-1 nova_compute[235132]: 2025-10-10 10:25:12.431 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:25:12 compute-1 nova_compute[235132]: 2025-10-10 10:25:12.434 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:25:12 compute-1 nova_compute[235132]: 2025-10-10 10:25:12.434 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:25:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:25:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:25:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:25:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:25:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:13 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:25:13 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:13 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:13 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:13.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:13 compute-1 ceph-mon[79167]: pgmap v1179: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:25:13 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/1732526091' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:25:13 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:13 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:13 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:13.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:13 compute-1 nova_compute[235132]: 2025-10-10 10:25:13.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:25:15 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:15 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:15 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:15.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:15 compute-1 ceph-mon[79167]: pgmap v1180: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:25:15 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:15 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:15 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:15.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:15 compute-1 nova_compute[235132]: 2025-10-10 10:25:15.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:25:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:25:17 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:17 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:17 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:17.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:17 compute-1 ceph-mon[79167]: pgmap v1181: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:25:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:25:17 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:17 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:17 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:17.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:25:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:25:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:25:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:25:18 compute-1 nova_compute[235132]: 2025-10-10 10:25:18.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:25:19 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:19 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:19 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:19.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:19 compute-1 ceph-mon[79167]: pgmap v1182: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:25:19 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:19 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:19 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:19.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:20 compute-1 nova_compute[235132]: 2025-10-10 10:25:20.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:25:21 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:21 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:21 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:21.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:21 compute-1 ceph-mon[79167]: pgmap v1183: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:25:21 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:21 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:21 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:21.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:21 compute-1 sudo[256994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:25:21 compute-1 sudo[256994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:25:21 compute-1 sudo[256994]: pam_unix(sudo:session): session closed for user root
Oct 10 10:25:22 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 10:25:22 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 6935 writes, 37K keys, 6935 commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.04 MB/s
                                           Cumulative WAL: 6935 writes, 6935 syncs, 1.00 writes per sync, written: 0.09 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1550 writes, 8346 keys, 1550 commit groups, 1.0 writes per commit group, ingest: 17.91 MB, 0.03 MB/s
                                           Interval WAL: 1551 writes, 1551 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0    144.7      0.38              0.20        20    0.019       0      0       0.0       0.0
                                             L6      1/0   14.17 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.5    207.4    178.1      1.38              0.83        19    0.072    107K    10K       0.0       0.0
                                            Sum      1/0   14.17 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.5    162.8    170.9      1.75              1.03        39    0.045    107K    10K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.0    163.0    164.0      0.49              0.32        10    0.049     34K   3591       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0    207.4    178.1      1.38              0.83        19    0.072    107K    10K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0    145.6      0.37              0.20        19    0.020       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.053, interval 0.011
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.29 GB write, 0.12 MB/s write, 0.28 GB read, 0.12 MB/s read, 1.8 seconds
                                           Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.08 GB read, 0.13 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5625d3e63350#2 capacity: 304.00 MB usage: 26.82 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000267 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1611,26.00 MB,8.55392%) FilterBlock(39,311.17 KB,0.0999601%) IndexBlock(39,528.27 KB,0.169699%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Oct 10 10:25:22 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:25:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:25:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:25:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:25:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:25:23 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:23 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:23 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:23.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:23 compute-1 ceph-mon[79167]: pgmap v1184: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:25:23 compute-1 nova_compute[235132]: 2025-10-10 10:25:23.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:25:23 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:23 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:23 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:23.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:25 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:25 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:25 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:25.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:25 compute-1 ceph-mon[79167]: pgmap v1185: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:25:25 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:25 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:25 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:25.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:25 compute-1 nova_compute[235132]: 2025-10-10 10:25:25.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:25:26 compute-1 ceph-mon[79167]: pgmap v1186: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:25:27 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:27 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:25:27 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:27.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:25:27 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:25:27 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:27 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:27 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:27.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:25:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:25:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:25:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:25:28 compute-1 nova_compute[235132]: 2025-10-10 10:25:28.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:25:29 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:29 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:25:29 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:29.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:25:29 compute-1 ceph-mon[79167]: pgmap v1187: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:25:29 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:29 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:29 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:29.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:30 compute-1 nova_compute[235132]: 2025-10-10 10:25:30.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:25:31 compute-1 podman[257023]: 2025-10-10 10:25:31.013023094 +0000 UTC m=+0.119217819 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Oct 10 10:25:31 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:31 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:31 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:31.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:31 compute-1 ceph-mon[79167]: pgmap v1188: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:25:31 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:31 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:31 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:31.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:32 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:25:32 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:25:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:25:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:25:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:25:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:25:33 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:33 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:33 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:33.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:33 compute-1 ceph-mon[79167]: pgmap v1189: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:25:33 compute-1 nova_compute[235132]: 2025-10-10 10:25:33.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:25:33 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:33 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:25:33 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:33.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:25:35 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:35 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:25:35 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:35.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:25:35 compute-1 ceph-mon[79167]: pgmap v1190: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:25:35 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:35 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:25:35 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:35.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:25:35 compute-1 nova_compute[235132]: 2025-10-10 10:25:35.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:25:37 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:37 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:25:37 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:37.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:25:37 compute-1 ceph-mon[79167]: pgmap v1191: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:25:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:25:37 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:37 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:37 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:37.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:25:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:25:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:25:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:25:38 compute-1 nova_compute[235132]: 2025-10-10 10:25:38.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:25:39 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:39 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:25:39 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:39.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:25:39 compute-1 ceph-mon[79167]: pgmap v1192: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:25:39 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:39 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:25:39 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:39.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:25:40 compute-1 nova_compute[235132]: 2025-10-10 10:25:40.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:25:41 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:41 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:41 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:41.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:41 compute-1 ceph-mon[79167]: pgmap v1193: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:25:41 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:41 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:41 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:41.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:42 compute-1 sudo[257045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:25:42 compute-1 sudo[257045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:25:42 compute-1 sudo[257045]: pam_unix(sudo:session): session closed for user root
Oct 10 10:25:42 compute-1 podman[257069]: 2025-10-10 10:25:42.139499158 +0000 UTC m=+0.096554590 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 10 10:25:42 compute-1 podman[257070]: 2025-10-10 10:25:42.150429867 +0000 UTC m=+0.100950341 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 10 10:25:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:25:42.225 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:25:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:25:42.225 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:25:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:25:42.225 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:25:42 compute-1 podman[257110]: 2025-10-10 10:25:42.273226983 +0000 UTC m=+0.106225614 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 10 10:25:42 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:25:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:25:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:25:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:25:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:43 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:25:43 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:43 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:43 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:43.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:43 compute-1 ceph-mon[79167]: pgmap v1194: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:25:43 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:43 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:43 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:43.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:43 compute-1 nova_compute[235132]: 2025-10-10 10:25:43.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:25:44 compute-1 sudo[257138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:25:44 compute-1 sudo[257138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:25:44 compute-1 sudo[257138]: pam_unix(sudo:session): session closed for user root
Oct 10 10:25:44 compute-1 sudo[257163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:25:44 compute-1 sudo[257163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:25:45 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:45 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:45 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:45.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:45 compute-1 ceph-mon[79167]: pgmap v1195: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:25:45 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:25:45 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:25:45 compute-1 sudo[257163]: pam_unix(sudo:session): session closed for user root
Oct 10 10:25:45 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:45 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:25:45 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:45.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:25:45 compute-1 nova_compute[235132]: 2025-10-10 10:25:45.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:25:46 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:25:46 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:25:46 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:25:46 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:25:46 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:25:46 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:25:46 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:25:47 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:47 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:25:47 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:47.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:25:47 compute-1 ceph-mon[79167]: pgmap v1196: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:25:47 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:25:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:25:47 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:47 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:25:47 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:47.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:25:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:25:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:25:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:25:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:25:48 compute-1 nova_compute[235132]: 2025-10-10 10:25:48.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:25:49 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:49 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:49 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:49.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:49 compute-1 ceph-mon[79167]: pgmap v1197: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:25:49 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:49 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:49 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:49.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:50 compute-1 sudo[257224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:25:50 compute-1 sudo[257224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:25:50 compute-1 sudo[257224]: pam_unix(sudo:session): session closed for user root
Oct 10 10:25:50 compute-1 nova_compute[235132]: 2025-10-10 10:25:50.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:25:51 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:51 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:51 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:51.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:51 compute-1 ceph-mon[79167]: pgmap v1198: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:25:51 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:25:51 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:25:51 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:51 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:51 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:51.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:25:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:25:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:25:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:25:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:25:53 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:53 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:53 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:53.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:53 compute-1 ceph-mon[79167]: pgmap v1199: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:25:53 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:53 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:53 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:53.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:53 compute-1 nova_compute[235132]: 2025-10-10 10:25:53.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:25:55 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:55 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:25:55 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:55.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:25:55 compute-1 ceph-mon[79167]: pgmap v1200: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:25:55 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:55 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:25:55 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:55.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:25:55 compute-1 nova_compute[235132]: 2025-10-10 10:25:55.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:25:57 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:57 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:57 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:57.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:57 compute-1 ceph-mon[79167]: pgmap v1201: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:25:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:25:57 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:57 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:25:57 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:57.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:25:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:25:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:25:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:25:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:25:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:25:58 compute-1 nova_compute[235132]: 2025-10-10 10:25:58.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:25:59 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:59 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:25:59 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:25:59.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:25:59 compute-1 ceph-mon[79167]: pgmap v1202: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:25:59 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:25:59 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:25:59 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:25:59.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:26:00 compute-1 ceph-mon[79167]: pgmap v1203: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:26:00 compute-1 nova_compute[235132]: 2025-10-10 10:26:00.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:26:01 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:01 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:26:01 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:01.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:26:01 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:26:01 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:01 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:26:01 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:01.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:26:02 compute-1 podman[257255]: 2025-10-10 10:26:02.001034766 +0000 UTC m=+0.088190332 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct 10 10:26:02 compute-1 sudo[257274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:26:02 compute-1 sudo[257274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:26:02 compute-1 sudo[257274]: pam_unix(sudo:session): session closed for user root
Oct 10 10:26:02 compute-1 nova_compute[235132]: 2025-10-10 10:26:02.436 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:26:02 compute-1 nova_compute[235132]: 2025-10-10 10:26:02.437 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:26:02 compute-1 nova_compute[235132]: 2025-10-10 10:26:02.437 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:26:02 compute-1 nova_compute[235132]: 2025-10-10 10:26:02.438 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 10:26:02 compute-1 ceph-mon[79167]: pgmap v1204: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:26:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:26:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:26:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:26:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:26:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:26:03 compute-1 nova_compute[235132]: 2025-10-10 10:26:03.040 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:26:03 compute-1 nova_compute[235132]: 2025-10-10 10:26:03.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:26:03 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:03 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:26:03 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:03.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:26:03 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/212724700' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:26:03 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:03 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:26:03 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:03.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:26:03 compute-1 nova_compute[235132]: 2025-10-10 10:26:03.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:26:04 compute-1 ceph-mon[79167]: pgmap v1205: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:26:04 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/470207897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:26:05 compute-1 nova_compute[235132]: 2025-10-10 10:26:05.045 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:26:05 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:05 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:05 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:05.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:05 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:05 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:26:05 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:05.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:26:05 compute-1 nova_compute[235132]: 2025-10-10 10:26:05.986 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:26:06 compute-1 nova_compute[235132]: 2025-10-10 10:26:06.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:26:06 compute-1 nova_compute[235132]: 2025-10-10 10:26:06.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 10:26:06 compute-1 nova_compute[235132]: 2025-10-10 10:26:06.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 10:26:06 compute-1 nova_compute[235132]: 2025-10-10 10:26:06.064 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 10 10:26:06 compute-1 ceph-mon[79167]: pgmap v1206: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:26:07 compute-1 nova_compute[235132]: 2025-10-10 10:26:07.060 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:26:07 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:07 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:07 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:07.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:26:07 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:07 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:07 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:07.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:07 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3647399066' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:26:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:26:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:26:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:26:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:26:08 compute-1 nova_compute[235132]: 2025-10-10 10:26:08.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:26:08 compute-1 ceph-mon[79167]: pgmap v1207: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:26:08 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1949448778' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:26:09 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:09 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:26:09 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:09.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:26:09 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:09 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:09 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:09.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:10 compute-1 nova_compute[235132]: 2025-10-10 10:26:10.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:26:10 compute-1 nova_compute[235132]: 2025-10-10 10:26:10.990 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:26:11 compute-1 ceph-mon[79167]: pgmap v1208: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:26:11 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:11 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:11 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:11.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:11 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:11 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:11 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:11.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:12 compute-1 nova_compute[235132]: 2025-10-10 10:26:12.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:26:12 compute-1 nova_compute[235132]: 2025-10-10 10:26:12.069 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:26:12 compute-1 nova_compute[235132]: 2025-10-10 10:26:12.070 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:26:12 compute-1 nova_compute[235132]: 2025-10-10 10:26:12.070 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:26:12 compute-1 nova_compute[235132]: 2025-10-10 10:26:12.070 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:26:12 compute-1 nova_compute[235132]: 2025-10-10 10:26:12.070 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:26:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:26:12 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/943224587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:26:12 compute-1 nova_compute[235132]: 2025-10-10 10:26:12.530 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:26:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:26:12 compute-1 nova_compute[235132]: 2025-10-10 10:26:12.808 2 WARNING nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:26:12 compute-1 nova_compute[235132]: 2025-10-10 10:26:12.811 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4859MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:26:12 compute-1 nova_compute[235132]: 2025-10-10 10:26:12.812 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:26:12 compute-1 nova_compute[235132]: 2025-10-10 10:26:12.812 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:26:12 compute-1 podman[257326]: 2025-10-10 10:26:12.962815069 +0000 UTC m=+0.064810682 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 10 10:26:12 compute-1 podman[257327]: 2025-10-10 10:26:12.989495578 +0000 UTC m=+0.080372747 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 10 10:26:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:26:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:26:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:26:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:13 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:26:13 compute-1 podman[257328]: 2025-10-10 10:26:13.00670995 +0000 UTC m=+0.101005993 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 10 10:26:13 compute-1 ceph-mon[79167]: pgmap v1209: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:26:13 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/943224587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:26:13 compute-1 nova_compute[235132]: 2025-10-10 10:26:13.047 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:26:13 compute-1 nova_compute[235132]: 2025-10-10 10:26:13.047 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:26:13 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:13 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:13 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:13.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:13 compute-1 nova_compute[235132]: 2025-10-10 10:26:13.153 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Refreshing inventories for resource provider c9b2c4a3-cb19-4387-8719-36027e3cdaec _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 10 10:26:13 compute-1 nova_compute[235132]: 2025-10-10 10:26:13.256 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Updating ProviderTree inventory for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 10 10:26:13 compute-1 nova_compute[235132]: 2025-10-10 10:26:13.256 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Updating inventory in ProviderTree for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 10 10:26:13 compute-1 nova_compute[235132]: 2025-10-10 10:26:13.294 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Refreshing aggregate associations for resource provider c9b2c4a3-cb19-4387-8719-36027e3cdaec, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 10 10:26:13 compute-1 nova_compute[235132]: 2025-10-10 10:26:13.328 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Refreshing trait associations for resource provider c9b2c4a3-cb19-4387-8719-36027e3cdaec, traits: HW_CPU_X86_AESNI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_F16C,HW_CPU_X86_AVX,HW_CPU_X86_FMA3,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE42,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_CLMUL,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSSE3,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE41,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI2,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 10 10:26:13 compute-1 nova_compute[235132]: 2025-10-10 10:26:13.348 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:26:13 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:13 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:26:13 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:13.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:26:13 compute-1 nova_compute[235132]: 2025-10-10 10:26:13.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:26:13 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:26:13 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2125983211' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:26:13 compute-1 nova_compute[235132]: 2025-10-10 10:26:13.811 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:26:13 compute-1 nova_compute[235132]: 2025-10-10 10:26:13.820 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:26:13 compute-1 nova_compute[235132]: 2025-10-10 10:26:13.840 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:26:13 compute-1 nova_compute[235132]: 2025-10-10 10:26:13.844 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:26:13 compute-1 nova_compute[235132]: 2025-10-10 10:26:13.845 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.032s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:26:13 compute-1 nova_compute[235132]: 2025-10-10 10:26:13.846 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:26:13 compute-1 nova_compute[235132]: 2025-10-10 10:26:13.847 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 10 10:26:13 compute-1 nova_compute[235132]: 2025-10-10 10:26:13.870 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 10 10:26:13 compute-1 nova_compute[235132]: 2025-10-10 10:26:13.871 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:26:13 compute-1 nova_compute[235132]: 2025-10-10 10:26:13.872 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 10 10:26:13 compute-1 nova_compute[235132]: 2025-10-10 10:26:13.889 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:26:14 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/2125983211' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:26:15 compute-1 ceph-mon[79167]: pgmap v1210: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:26:15 compute-1 nova_compute[235132]: 2025-10-10 10:26:15.062 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:26:15 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:15 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 10:26:15 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:15.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 10:26:15 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:15 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:15 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:15.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:15 compute-1 nova_compute[235132]: 2025-10-10 10:26:15.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:26:17 compute-1 ceph-mon[79167]: pgmap v1211: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:26:17 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:26:17 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:17 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:17 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:17.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:26:17 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:17 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:17 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:17.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:26:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:26:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:26:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:26:18 compute-1 nova_compute[235132]: 2025-10-10 10:26:18.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:26:19 compute-1 ceph-mon[79167]: pgmap v1212: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:26:19 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:19 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:26:19 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:19.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:26:19 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:19 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:19 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:19.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:20 compute-1 nova_compute[235132]: 2025-10-10 10:26:20.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:26:21 compute-1 ceph-mon[79167]: pgmap v1213: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:26:21 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:21 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:26:21 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:21.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:26:21 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:21 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:26:21 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:21.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:26:22 compute-1 sudo[257420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:26:22 compute-1 sudo[257420]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:26:22 compute-1 sudo[257420]: pam_unix(sudo:session): session closed for user root
Oct 10 10:26:22 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:26:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:26:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:26:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:26:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:26:23 compute-1 ceph-mon[79167]: pgmap v1214: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:26:23 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:23 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:23 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:23.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:23 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:23 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:23 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:23.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:23 compute-1 nova_compute[235132]: 2025-10-10 10:26:23.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:26:24 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #70. Immutable memtables: 0.
Oct 10 10:26:24 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:26:24.592944) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 10:26:24 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 70
Oct 10 10:26:24 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091984592971, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 1135, "num_deletes": 251, "total_data_size": 2602490, "memory_usage": 2650792, "flush_reason": "Manual Compaction"}
Oct 10 10:26:24 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #71: started
Oct 10 10:26:24 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091984601618, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 71, "file_size": 1080048, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36759, "largest_seqno": 37889, "table_properties": {"data_size": 1076005, "index_size": 1631, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10781, "raw_average_key_size": 20, "raw_value_size": 1067267, "raw_average_value_size": 2068, "num_data_blocks": 70, "num_entries": 516, "num_filter_entries": 516, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760091895, "oldest_key_time": 1760091895, "file_creation_time": 1760091984, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:26:24 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 8726 microseconds, and 3588 cpu microseconds.
Oct 10 10:26:24 compute-1 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:26:24 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:26:24.601665) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #71: 1080048 bytes OK
Oct 10 10:26:24 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:26:24.601687) [db/memtable_list.cc:519] [default] Level-0 commit table #71 started
Oct 10 10:26:24 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:26:24.603422) [db/memtable_list.cc:722] [default] Level-0 commit table #71: memtable #1 done
Oct 10 10:26:24 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:26:24.603447) EVENT_LOG_v1 {"time_micros": 1760091984603439, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 10:26:24 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:26:24.603469) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 10:26:24 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 2596970, prev total WAL file size 2596970, number of live WAL files 2.
Oct 10 10:26:24 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000067.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:26:24 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:26:24.605085) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303033' seq:72057594037927935, type:22 .. '6D6772737461740031323535' seq:0, type:0; will stop at (end)
Oct 10 10:26:24 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 10:26:24 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [71(1054KB)], [69(14MB)]
Oct 10 10:26:24 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091984605163, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [71], "files_L6": [69], "score": -1, "input_data_size": 15939098, "oldest_snapshot_seqno": -1}
Oct 10 10:26:24 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #72: 6535 keys, 12460113 bytes, temperature: kUnknown
Oct 10 10:26:24 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091984675849, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 72, "file_size": 12460113, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12419575, "index_size": 23082, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16389, "raw_key_size": 171757, "raw_average_key_size": 26, "raw_value_size": 12304885, "raw_average_value_size": 1882, "num_data_blocks": 907, "num_entries": 6535, "num_filter_entries": 6535, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089522, "oldest_key_time": 0, "file_creation_time": 1760091984, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 72, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:26:24 compute-1 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:26:24 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:26:24.676260) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 12460113 bytes
Oct 10 10:26:24 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:26:24.677820) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 225.2 rd, 176.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 14.2 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(26.3) write-amplify(11.5) OK, records in: 7014, records dropped: 479 output_compression: NoCompression
Oct 10 10:26:24 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:26:24.677901) EVENT_LOG_v1 {"time_micros": 1760091984677882, "job": 42, "event": "compaction_finished", "compaction_time_micros": 70779, "compaction_time_cpu_micros": 45566, "output_level": 6, "num_output_files": 1, "total_output_size": 12460113, "num_input_records": 7014, "num_output_records": 6535, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 10:26:24 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:26:24 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091984678543, "job": 42, "event": "table_file_deletion", "file_number": 71}
Oct 10 10:26:24 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000069.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:26:24 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760091984684410, "job": 42, "event": "table_file_deletion", "file_number": 69}
Oct 10 10:26:24 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:26:24.604978) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:26:24 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:26:24.684545) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:26:24 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:26:24.684552) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:26:24 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:26:24.684554) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:26:24 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:26:24.684556) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:26:24 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:26:24.684558) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:26:25 compute-1 ceph-mon[79167]: pgmap v1215: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:26:25 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:25 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:26:25 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:25.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:26:25 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:25 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:25 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:25.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:26 compute-1 nova_compute[235132]: 2025-10-10 10:26:26.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:26:27 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:27 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:27 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:27.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:27 compute-1 ceph-mon[79167]: pgmap v1216: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:26:27 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/1280316982' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:26:27 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/1280316982' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:26:27 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:26:27 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:27 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:26:27 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:27.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:26:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:26:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:26:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:26:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:26:28 compute-1 nova_compute[235132]: 2025-10-10 10:26:28.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:26:29 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:29 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:29 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:29.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:29 compute-1 ceph-mon[79167]: pgmap v1217: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:26:29 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:29 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:29 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:29.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:31 compute-1 nova_compute[235132]: 2025-10-10 10:26:31.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:26:31 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:31 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:26:31 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:31.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:26:31 compute-1 ceph-mon[79167]: pgmap v1218: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:26:31 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:31 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:31 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:31.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:32 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:26:32 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:26:32 compute-1 podman[257450]: 2025-10-10 10:26:32.983101295 +0000 UTC m=+0.082908818 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 10 10:26:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:26:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:26:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:26:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:26:33 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:33 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:26:33 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:33.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:26:33 compute-1 ceph-mon[79167]: pgmap v1219: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:26:33 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:33 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:33 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:33.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:33 compute-1 nova_compute[235132]: 2025-10-10 10:26:33.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:26:35 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:35 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:35 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:35.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:35 compute-1 ceph-mon[79167]: pgmap v1220: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:26:35 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:35 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:35 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:35.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:36 compute-1 nova_compute[235132]: 2025-10-10 10:26:36.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:26:37 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:37 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:26:37 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:37.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:26:37 compute-1 ceph-mon[79167]: pgmap v1221: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:26:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:26:37 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:37 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:26:37 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:37.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:26:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:26:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:26:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:26:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:26:38 compute-1 nova_compute[235132]: 2025-10-10 10:26:38.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:26:39 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:39 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:39 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:39.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:39 compute-1 ceph-mon[79167]: pgmap v1222: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:26:39 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:39 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:39 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:39.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:41 compute-1 nova_compute[235132]: 2025-10-10 10:26:41.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:26:41 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:41 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:26:41 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:41.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:26:41 compute-1 ceph-mon[79167]: pgmap v1223: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:26:41 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:41 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:41 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:41.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:26:42.226 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:26:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:26:42.227 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:26:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:26:42.227 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:26:42 compute-1 sudo[257475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:26:42 compute-1 sudo[257475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:26:42 compute-1 sudo[257475]: pam_unix(sudo:session): session closed for user root
Oct 10 10:26:42 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:26:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:26:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:26:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:26:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:43 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:26:43 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:43 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:26:43 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:43.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:26:43 compute-1 ceph-mon[79167]: pgmap v1224: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:26:43 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:43 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:26:43 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:43.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:26:43 compute-1 nova_compute[235132]: 2025-10-10 10:26:43.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:26:44 compute-1 podman[257502]: 2025-10-10 10:26:44.011827927 +0000 UTC m=+0.104327023 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 10 10:26:44 compute-1 podman[257501]: 2025-10-10 10:26:44.025997574 +0000 UTC m=+0.122936650 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:26:44 compute-1 podman[257503]: 2025-10-10 10:26:44.062453371 +0000 UTC m=+0.146352682 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_controller)
Oct 10 10:26:45 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:45 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:26:45 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:45.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:26:45 compute-1 ceph-mon[79167]: pgmap v1225: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:26:45 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:45 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:45 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:45.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:46 compute-1 nova_compute[235132]: 2025-10-10 10:26:46.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:26:47 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:47 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:47 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:47.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:47 compute-1 ceph-mon[79167]: pgmap v1226: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:26:47 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:26:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:26:47 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:47 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:47 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:47.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:26:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:26:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:26:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:26:48 compute-1 nova_compute[235132]: 2025-10-10 10:26:48.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:26:49 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:49 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:26:49 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:49.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:26:49 compute-1 ceph-mon[79167]: pgmap v1227: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:26:49 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:49 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:49 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:49.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:50 compute-1 sudo[257567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:26:50 compute-1 sudo[257567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:26:50 compute-1 sudo[257567]: pam_unix(sudo:session): session closed for user root
Oct 10 10:26:50 compute-1 sudo[257592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:26:50 compute-1 sudo[257592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:26:51 compute-1 nova_compute[235132]: 2025-10-10 10:26:51.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:26:51 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:51 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:51 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:51.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:51 compute-1 ceph-mon[79167]: pgmap v1228: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:26:51 compute-1 sudo[257592]: pam_unix(sudo:session): session closed for user root
Oct 10 10:26:51 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:51 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:51 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:51.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:52 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:26:52 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:26:52 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:26:52 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:26:52 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:26:52 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:26:52 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:26:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:26:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:26:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:26:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:26:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:26:53 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:53 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:53 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:53.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:53 compute-1 ceph-mon[79167]: pgmap v1229: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:26:53 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:53 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:53 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:53.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:53 compute-1 nova_compute[235132]: 2025-10-10 10:26:53.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:26:55 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:55 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:26:55 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:55.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:26:55 compute-1 ceph-mon[79167]: pgmap v1230: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Oct 10 10:26:55 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:55 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:55 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:55.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:56 compute-1 nova_compute[235132]: 2025-10-10 10:26:56.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:26:56 compute-1 sudo[257652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:26:56 compute-1 sudo[257652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:26:56 compute-1 sudo[257652]: pam_unix(sudo:session): session closed for user root
Oct 10 10:26:57 compute-1 ceph-mon[79167]: pgmap v1231: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:26:57 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:26:57 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:26:57 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:57 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:57 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:57.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:26:57 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:57 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:57 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:57.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:26:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:26:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:26:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:26:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:26:58 compute-1 nova_compute[235132]: 2025-10-10 10:26:58.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:26:59 compute-1 ceph-mon[79167]: pgmap v1232: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Oct 10 10:26:59 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:59 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:26:59 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:26:59.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:26:59 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:26:59 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:26:59 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:26:59.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:27:01 compute-1 nova_compute[235132]: 2025-10-10 10:27:01.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:01 compute-1 nova_compute[235132]: 2025-10-10 10:27:01.066 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:27:01 compute-1 ceph-mon[79167]: pgmap v1233: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:27:01 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:01 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:27:01 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:01.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:27:01 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:01 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:01 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:01.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:02 compute-1 nova_compute[235132]: 2025-10-10 10:27:02.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:27:02 compute-1 nova_compute[235132]: 2025-10-10 10:27:02.045 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:27:02 compute-1 nova_compute[235132]: 2025-10-10 10:27:02.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 10:27:02 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:27:02 compute-1 sudo[257680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:27:02 compute-1 sudo[257680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:27:02 compute-1 sudo[257680]: pam_unix(sudo:session): session closed for user root
Oct 10 10:27:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:27:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:27:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:27:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:27:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:27:03 compute-1 nova_compute[235132]: 2025-10-10 10:27:03.045 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:27:03 compute-1 ceph-mon[79167]: pgmap v1234: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:27:03 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:03 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:03 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:03.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:03 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:03 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:03 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:03.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:03 compute-1 nova_compute[235132]: 2025-10-10 10:27:03.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:03 compute-1 podman[257706]: 2025-10-10 10:27:03.99829934 +0000 UTC m=+0.093751244 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:27:05 compute-1 nova_compute[235132]: 2025-10-10 10:27:05.040 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:27:05 compute-1 ceph-mon[79167]: pgmap v1235: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:27:05 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:05 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:05 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:05.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:05 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:05 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:05 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:05.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:06 compute-1 nova_compute[235132]: 2025-10-10 10:27:06.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:06 compute-1 nova_compute[235132]: 2025-10-10 10:27:06.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:27:06 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/821806748' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:27:06 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2564603434' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:27:07 compute-1 ceph-mon[79167]: pgmap v1236: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:27:07 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:07 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:07 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:07.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:27:07 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:07 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:07 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:07.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:27:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:27:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:27:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:27:08 compute-1 nova_compute[235132]: 2025-10-10 10:27:08.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:27:08 compute-1 nova_compute[235132]: 2025-10-10 10:27:08.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 10:27:08 compute-1 nova_compute[235132]: 2025-10-10 10:27:08.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 10:27:08 compute-1 nova_compute[235132]: 2025-10-10 10:27:08.062 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 10 10:27:08 compute-1 nova_compute[235132]: 2025-10-10 10:27:08.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:09 compute-1 ceph-mon[79167]: pgmap v1237: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 0 B/s wr, 174 op/s
Oct 10 10:27:09 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3249927567' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:27:09 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:09 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:09 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:09.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:09 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:09 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:27:09 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:09.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:27:10 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3915087126' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:27:11 compute-1 nova_compute[235132]: 2025-10-10 10:27:11.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:11 compute-1 ceph-mon[79167]: pgmap v1238: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 0 B/s wr, 173 op/s
Oct 10 10:27:11 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:11 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:11 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:11.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:11 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:11 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:11 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:11.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:12 compute-1 nova_compute[235132]: 2025-10-10 10:27:12.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:27:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:27:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:27:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:27:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:27:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:13 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:27:13 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:13 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:27:13 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:13.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:27:13 compute-1 ceph-mon[79167]: pgmap v1239: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 0 B/s wr, 173 op/s
Oct 10 10:27:13 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:13 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:13 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:13.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:13 compute-1 nova_compute[235132]: 2025-10-10 10:27:13.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:14 compute-1 nova_compute[235132]: 2025-10-10 10:27:14.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:27:14 compute-1 nova_compute[235132]: 2025-10-10 10:27:14.164 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:27:14 compute-1 nova_compute[235132]: 2025-10-10 10:27:14.165 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:27:14 compute-1 nova_compute[235132]: 2025-10-10 10:27:14.165 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:27:14 compute-1 nova_compute[235132]: 2025-10-10 10:27:14.165 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:27:14 compute-1 nova_compute[235132]: 2025-10-10 10:27:14.165 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:27:14 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:27:14 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4181353496' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:27:14 compute-1 nova_compute[235132]: 2025-10-10 10:27:14.664 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:27:14 compute-1 nova_compute[235132]: 2025-10-10 10:27:14.873 2 WARNING nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:27:14 compute-1 nova_compute[235132]: 2025-10-10 10:27:14.875 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4846MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:27:14 compute-1 nova_compute[235132]: 2025-10-10 10:27:14.875 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:27:14 compute-1 nova_compute[235132]: 2025-10-10 10:27:14.875 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:27:14 compute-1 podman[257751]: 2025-10-10 10:27:14.947686605 +0000 UTC m=+0.055713443 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 10 10:27:14 compute-1 podman[257752]: 2025-10-10 10:27:14.95368773 +0000 UTC m=+0.060046633 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Oct 10 10:27:15 compute-1 podman[257753]: 2025-10-10 10:27:15.031498606 +0000 UTC m=+0.134602180 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 10 10:27:15 compute-1 nova_compute[235132]: 2025-10-10 10:27:15.127 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:27:15 compute-1 nova_compute[235132]: 2025-10-10 10:27:15.128 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:27:15 compute-1 nova_compute[235132]: 2025-10-10 10:27:15.157 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:27:15 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:15 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:15 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:15.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:15 compute-1 ceph-mon[79167]: pgmap v1240: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 0 B/s wr, 174 op/s
Oct 10 10:27:15 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/4181353496' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:27:15 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:27:15 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/106958647' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:27:15 compute-1 nova_compute[235132]: 2025-10-10 10:27:15.684 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:27:15 compute-1 nova_compute[235132]: 2025-10-10 10:27:15.692 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:27:15 compute-1 nova_compute[235132]: 2025-10-10 10:27:15.743 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:27:15 compute-1 nova_compute[235132]: 2025-10-10 10:27:15.745 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:27:15 compute-1 nova_compute[235132]: 2025-10-10 10:27:15.745 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.870s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:27:15 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:15 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:27:15 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:15.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:27:16 compute-1 nova_compute[235132]: 2025-10-10 10:27:16.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:16 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/106958647' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:27:17 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:17 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:27:17 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:17.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:27:17 compute-1 ceph-mon[79167]: pgmap v1241: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 0 B/s wr, 173 op/s
Oct 10 10:27:17 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:27:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:27:17 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:17 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:27:17 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:17.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:27:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:27:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:27:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:27:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:27:18 compute-1 nova_compute[235132]: 2025-10-10 10:27:18.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:19 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:19 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:19 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:19.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:19 compute-1 ceph-mon[79167]: pgmap v1242: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 0 B/s wr, 174 op/s
Oct 10 10:27:19 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:19 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:19 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:19.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:21 compute-1 nova_compute[235132]: 2025-10-10 10:27:21.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:21 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:21 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:21 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:21.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:21 compute-1 ceph-mon[79167]: pgmap v1243: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:27:21 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:21 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:27:21 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:21.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:27:22 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:27:22 compute-1 sudo[257841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:27:22 compute-1 sudo[257841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:27:22 compute-1 sudo[257841]: pam_unix(sudo:session): session closed for user root
Oct 10 10:27:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:27:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:27:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:27:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:27:23 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:23 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:23 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:23.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:23 compute-1 ceph-mon[79167]: pgmap v1244: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:27:23 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:23 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:27:23 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:23.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:27:23 compute-1 nova_compute[235132]: 2025-10-10 10:27:23.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:25 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:25 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:27:25 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:25.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:27:25 compute-1 ceph-mon[79167]: pgmap v1245: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:27:25 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:25 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:25 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:25.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:26 compute-1 nova_compute[235132]: 2025-10-10 10:27:26.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:27 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:27 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:27:27 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:27.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:27:27 compute-1 ceph-mon[79167]: pgmap v1246: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:27:27 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/4007566910' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:27:27 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/4007566910' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:27:27 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:27:27 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:27 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:27 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:27.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:27:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:27:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:27:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:27:28 compute-1 nova_compute[235132]: 2025-10-10 10:27:28.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:29 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:29 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:27:29 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:29.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:27:29 compute-1 ceph-mon[79167]: pgmap v1247: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:27:29 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:29 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:27:29 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:29.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:27:31 compute-1 nova_compute[235132]: 2025-10-10 10:27:31.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:31 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:31 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:27:31 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:31.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:27:31 compute-1 nova_compute[235132]: 2025-10-10 10:27:31.315 2 DEBUG oslo_concurrency.processutils [None req-5428eec2-0e0c-4df7-adf7-b6b22d8050c9 e1aed125091e48e09d5990f110c14c39 ec962e275689437d80680ff3ea69c852 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:27:31 compute-1 ceph-mon[79167]: pgmap v1248: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:27:31 compute-1 nova_compute[235132]: 2025-10-10 10:27:31.360 2 DEBUG oslo_concurrency.processutils [None req-5428eec2-0e0c-4df7-adf7-b6b22d8050c9 e1aed125091e48e09d5990f110c14c39 ec962e275689437d80680ff3ea69c852 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:27:31 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:31 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:31 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:31.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:32 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:27:32 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:27:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:27:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:27:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:27:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:27:33 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:33 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:33 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:33.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:33 compute-1 ceph-mon[79167]: pgmap v1249: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:27:33 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:33 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:27:33 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:33.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:27:33 compute-1 nova_compute[235132]: 2025-10-10 10:27:33.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:34 compute-1 podman[257874]: 2025-10-10 10:27:34.979725732 +0000 UTC m=+0.076827210 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 10 10:27:35 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:35 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:35 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:35.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:35 compute-1 ceph-mon[79167]: pgmap v1250: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:27:35 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:35 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:27:35 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:35.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:27:36 compute-1 nova_compute[235132]: 2025-10-10 10:27:36.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:36 compute-1 nova_compute[235132]: 2025-10-10 10:27:36.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:36 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:27:36.324 141156 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'da:dc:6a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '42:2f:dd:4e:d8:41'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 10 10:27:36 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:27:36.326 141156 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 10 10:27:37 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:37 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:27:37 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:37.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:27:37 compute-1 ceph-mon[79167]: pgmap v1251: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:27:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:27:37 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:37 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:27:37 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:37.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:27:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:27:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:27:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:27:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:27:38 compute-1 nova_compute[235132]: 2025-10-10 10:27:38.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:39 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:39 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:27:39 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:39.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:27:39 compute-1 ceph-mon[79167]: pgmap v1252: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:27:39 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:39 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:39 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:39.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:41 compute-1 nova_compute[235132]: 2025-10-10 10:27:41.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:41 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:41 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:41 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:41.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:41 compute-1 ceph-mon[79167]: pgmap v1253: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:27:41 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:41 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:41 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:41.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:27:42.227 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:27:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:27:42.228 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:27:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:27:42.228 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:27:42 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:27:42 compute-1 sudo[257898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:27:42 compute-1 sudo[257898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:27:42 compute-1 sudo[257898]: pam_unix(sudo:session): session closed for user root
Oct 10 10:27:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:27:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:27:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:27:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:43 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:27:43 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:43 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:43 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:43.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:43 compute-1 ceph-mon[79167]: pgmap v1254: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:27:43 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:43 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:27:43 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:43.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:27:43 compute-1 nova_compute[235132]: 2025-10-10 10:27:43.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:44 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:27:44.328 141156 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=ee0899c1-415d-4aa8-abe8-1240b4e8bf2c, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 10 10:27:45 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:45 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:45 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:45.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:45 compute-1 ceph-mon[79167]: pgmap v1255: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:27:45 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:45 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:45 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:45.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:45 compute-1 podman[257925]: 2025-10-10 10:27:45.983128932 +0000 UTC m=+0.089131207 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 10:27:45 compute-1 podman[257926]: 2025-10-10 10:27:45.9885347 +0000 UTC m=+0.073786828 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 10:27:46 compute-1 podman[257927]: 2025-10-10 10:27:46.050407361 +0000 UTC m=+0.132039180 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 10 10:27:46 compute-1 nova_compute[235132]: 2025-10-10 10:27:46.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:46 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:27:47 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:47 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:47 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:47.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:47 compute-1 ceph-mon[79167]: pgmap v1256: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:27:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:27:47 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:47 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:47 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:47.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:27:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:27:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:27:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:27:48 compute-1 ceph-mon[79167]: pgmap v1257: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:27:48 compute-1 nova_compute[235132]: 2025-10-10 10:27:48.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:49 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:49 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:27:49 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:49.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:27:49 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:49 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:27:49 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:49.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:27:50 compute-1 ceph-mon[79167]: pgmap v1258: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:27:51 compute-1 nova_compute[235132]: 2025-10-10 10:27:51.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:51 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:51 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:51 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:51.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:51 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:51 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:51 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:51.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:52 compute-1 ceph-mon[79167]: pgmap v1259: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:27:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:27:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:27:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:27:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:27:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:27:53 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:53 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:27:53 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:53.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:27:53 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:53 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:27:53 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:53.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:27:53 compute-1 nova_compute[235132]: 2025-10-10 10:27:53.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:54 compute-1 ceph-mon[79167]: pgmap v1260: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:27:55 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:55 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:27:55 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:55.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:27:55 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:55 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 10:27:55 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:55.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 10:27:56 compute-1 nova_compute[235132]: 2025-10-10 10:27:56.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:56 compute-1 sudo[257991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:27:56 compute-1 sudo[257991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:27:56 compute-1 sudo[257991]: pam_unix(sudo:session): session closed for user root
Oct 10 10:27:56 compute-1 sudo[258016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:27:56 compute-1 sudo[258016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:27:56 compute-1 ceph-mon[79167]: pgmap v1261: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:27:57 compute-1 sudo[258016]: pam_unix(sudo:session): session closed for user root
Oct 10 10:27:57 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:57 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:57 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:57.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:27:57 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:27:57 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:27:57 compute-1 ceph-mon[79167]: pgmap v1262: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:27:57 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:27:57 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:27:57 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:27:57 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:27:57 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:27:57 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:57 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:27:57 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:57.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:27:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:27:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:27:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:27:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:27:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:27:58 compute-1 nova_compute[235132]: 2025-10-10 10:27:58.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:27:59 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:59 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:27:59 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:27:59.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:27:59 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:27:59 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:27:59 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:27:59.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:28:00 compute-1 ceph-mon[79167]: pgmap v1263: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:28:01 compute-1 nova_compute[235132]: 2025-10-10 10:28:01.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:28:01 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:01 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:01 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:01.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:01 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:01 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:01 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:01.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:02 compute-1 sudo[258075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:28:02 compute-1 sudo[258075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:28:02 compute-1 sudo[258075]: pam_unix(sudo:session): session closed for user root
Oct 10 10:28:02 compute-1 ceph-mon[79167]: pgmap v1264: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:28:02 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:28:02 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:28:02 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:28:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:28:02 compute-1 sudo[258100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:28:02 compute-1 sudo[258100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:28:02 compute-1 sudo[258100]: pam_unix(sudo:session): session closed for user root
Oct 10 10:28:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:28:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:28:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:28:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:28:03 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:03 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:03 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:03.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:03 compute-1 nova_compute[235132]: 2025-10-10 10:28:03.747 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:28:03 compute-1 nova_compute[235132]: 2025-10-10 10:28:03.747 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:28:03 compute-1 nova_compute[235132]: 2025-10-10 10:28:03.747 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:28:03 compute-1 nova_compute[235132]: 2025-10-10 10:28:03.747 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:28:03 compute-1 nova_compute[235132]: 2025-10-10 10:28:03.748 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 10:28:03 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:03 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:03 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:03.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:03 compute-1 nova_compute[235132]: 2025-10-10 10:28:03.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:28:04 compute-1 ceph-mon[79167]: pgmap v1265: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:28:05 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:05 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:05 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:05.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:05 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:05 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:28:05 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:05.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:28:05 compute-1 podman[258127]: 2025-10-10 10:28:05.95544528 +0000 UTC m=+0.056484256 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 10 10:28:06 compute-1 nova_compute[235132]: 2025-10-10 10:28:06.040 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:28:06 compute-1 nova_compute[235132]: 2025-10-10 10:28:06.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:28:06 compute-1 ceph-mon[79167]: pgmap v1266: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:28:07 compute-1 nova_compute[235132]: 2025-10-10 10:28:07.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:28:07 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:07 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:07 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:07.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:07 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2704639440' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:28:07 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1632890228' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:28:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:28:07 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:07 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:07 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:07.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:28:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:28:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:28:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:28:08 compute-1 ceph-mon[79167]: pgmap v1267: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:28:08 compute-1 nova_compute[235132]: 2025-10-10 10:28:08.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:28:09 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:09 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:09 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:09.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:09 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:09 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:09 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:09.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:10 compute-1 nova_compute[235132]: 2025-10-10 10:28:10.039 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:28:10 compute-1 nova_compute[235132]: 2025-10-10 10:28:10.062 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:28:10 compute-1 nova_compute[235132]: 2025-10-10 10:28:10.063 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 10:28:10 compute-1 nova_compute[235132]: 2025-10-10 10:28:10.063 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 10:28:10 compute-1 nova_compute[235132]: 2025-10-10 10:28:10.081 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 10 10:28:10 compute-1 ceph-mon[79167]: pgmap v1268: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:28:11 compute-1 nova_compute[235132]: 2025-10-10 10:28:11.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:28:11 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:11 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:11 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:11.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:11 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/577030174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:28:11 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:11 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:11 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:11.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:12 compute-1 ceph-mon[79167]: pgmap v1269: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:28:12 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2943645577' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:28:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:28:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:28:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:13 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:28:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:13 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:28:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:13 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:28:13 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:13 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:28:13 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:13.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:28:13 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:13 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:28:13 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:13.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:28:13 compute-1 nova_compute[235132]: 2025-10-10 10:28:13.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:28:14 compute-1 nova_compute[235132]: 2025-10-10 10:28:14.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:28:14 compute-1 nova_compute[235132]: 2025-10-10 10:28:14.045 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:28:14 compute-1 nova_compute[235132]: 2025-10-10 10:28:14.083 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:28:14 compute-1 nova_compute[235132]: 2025-10-10 10:28:14.083 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:28:14 compute-1 nova_compute[235132]: 2025-10-10 10:28:14.084 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:28:14 compute-1 nova_compute[235132]: 2025-10-10 10:28:14.084 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:28:14 compute-1 nova_compute[235132]: 2025-10-10 10:28:14.084 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:28:14 compute-1 ceph-mon[79167]: pgmap v1270: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:28:14 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:28:14 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3528792367' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:28:14 compute-1 nova_compute[235132]: 2025-10-10 10:28:14.620 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:28:14 compute-1 nova_compute[235132]: 2025-10-10 10:28:14.853 2 WARNING nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:28:14 compute-1 nova_compute[235132]: 2025-10-10 10:28:14.855 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4863MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:28:14 compute-1 nova_compute[235132]: 2025-10-10 10:28:14.855 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:28:14 compute-1 nova_compute[235132]: 2025-10-10 10:28:14.856 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:28:14 compute-1 nova_compute[235132]: 2025-10-10 10:28:14.941 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:28:14 compute-1 nova_compute[235132]: 2025-10-10 10:28:14.942 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:28:14 compute-1 nova_compute[235132]: 2025-10-10 10:28:14.978 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:28:15 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:15 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:28:15 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:15.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:28:15 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3528792367' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:28:15 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:28:15 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1813754537' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:28:15 compute-1 nova_compute[235132]: 2025-10-10 10:28:15.449 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:28:15 compute-1 nova_compute[235132]: 2025-10-10 10:28:15.460 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:28:15 compute-1 nova_compute[235132]: 2025-10-10 10:28:15.478 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:28:15 compute-1 nova_compute[235132]: 2025-10-10 10:28:15.481 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:28:15 compute-1 nova_compute[235132]: 2025-10-10 10:28:15.482 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:28:15 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:15 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:28:15 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:15.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:28:16 compute-1 nova_compute[235132]: 2025-10-10 10:28:16.167 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:28:16 compute-1 ceph-mon[79167]: pgmap v1271: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:28:16 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/1813754537' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:28:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:28:16 compute-1 podman[258195]: 2025-10-10 10:28:16.988720848 +0000 UTC m=+0.083882554 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 10 10:28:16 compute-1 podman[258196]: 2025-10-10 10:28:16.998296389 +0000 UTC m=+0.086275389 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, container_name=multipathd)
Oct 10 10:28:17 compute-1 podman[258197]: 2025-10-10 10:28:17.038782706 +0000 UTC m=+0.126918480 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 10:28:17 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:17 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:17 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:17.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:28:17 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:17 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:17 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:17.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:28:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:28:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:28:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:28:18 compute-1 ceph-mon[79167]: pgmap v1272: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:28:18 compute-1 nova_compute[235132]: 2025-10-10 10:28:18.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:28:19 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:19 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:19 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:19.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:19 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:19 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:19 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:19.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:20 compute-1 ceph-mon[79167]: pgmap v1273: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:28:21 compute-1 nova_compute[235132]: 2025-10-10 10:28:21.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:28:21 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:21 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:21 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:21.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:21 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:21 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:21 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:21.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:22 compute-1 ceph-mon[79167]: pgmap v1274: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:28:22 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:28:22 compute-1 sudo[258265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:28:22 compute-1 sudo[258265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:28:22 compute-1 sudo[258265]: pam_unix(sudo:session): session closed for user root
Oct 10 10:28:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:28:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:28:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:28:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:28:23 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:23 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:23 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:23.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:23 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:23 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:28:23 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:23.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:28:23 compute-1 nova_compute[235132]: 2025-10-10 10:28:23.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:28:24 compute-1 ceph-mon[79167]: pgmap v1275: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:28:25 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:25 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:28:25 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:25.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:28:25 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:25 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:25 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:25.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:26 compute-1 nova_compute[235132]: 2025-10-10 10:28:26.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:28:26 compute-1 ceph-mon[79167]: pgmap v1276: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:28:26 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/3744893259' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:28:26 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/3744893259' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:28:27 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:27 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:28:27 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:27.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:28:27 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #73. Immutable memtables: 0.
Oct 10 10:28:27 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:28:27.516413) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 10:28:27 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 73
Oct 10 10:28:27 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760092107516499, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 1448, "num_deletes": 251, "total_data_size": 3527461, "memory_usage": 3577536, "flush_reason": "Manual Compaction"}
Oct 10 10:28:27 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #74: started
Oct 10 10:28:27 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760092107535495, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 74, "file_size": 2302543, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37894, "largest_seqno": 39337, "table_properties": {"data_size": 2296439, "index_size": 3367, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13187, "raw_average_key_size": 20, "raw_value_size": 2284080, "raw_average_value_size": 3465, "num_data_blocks": 147, "num_entries": 659, "num_filter_entries": 659, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760091985, "oldest_key_time": 1760091985, "file_creation_time": 1760092107, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:28:27 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 19297 microseconds, and 11453 cpu microseconds.
Oct 10 10:28:27 compute-1 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:28:27 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:28:27.535719) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #74: 2302543 bytes OK
Oct 10 10:28:27 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:28:27.535805) [db/memtable_list.cc:519] [default] Level-0 commit table #74 started
Oct 10 10:28:27 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:28:27.537589) [db/memtable_list.cc:722] [default] Level-0 commit table #74: memtable #1 done
Oct 10 10:28:27 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:28:27.537616) EVENT_LOG_v1 {"time_micros": 1760092107537607, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 10:28:27 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:28:27.537641) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 10:28:27 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 3520674, prev total WAL file size 3520674, number of live WAL files 2.
Oct 10 10:28:27 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000070.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:28:27 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:28:27.539926) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Oct 10 10:28:27 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 10:28:27 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [74(2248KB)], [72(11MB)]
Oct 10 10:28:27 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760092107539963, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [74], "files_L6": [72], "score": -1, "input_data_size": 14762656, "oldest_snapshot_seqno": -1}
Oct 10 10:28:27 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:28:27 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #75: 6678 keys, 12616581 bytes, temperature: kUnknown
Oct 10 10:28:27 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760092107607618, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 75, "file_size": 12616581, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12574968, "index_size": 23837, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16709, "raw_key_size": 175432, "raw_average_key_size": 26, "raw_value_size": 12457514, "raw_average_value_size": 1865, "num_data_blocks": 936, "num_entries": 6678, "num_filter_entries": 6678, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089522, "oldest_key_time": 0, "file_creation_time": 1760092107, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 75, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:28:27 compute-1 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:28:27 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:28:27.607957) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 12616581 bytes
Oct 10 10:28:27 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:28:27.609701) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 217.8 rd, 186.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 11.9 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(11.9) write-amplify(5.5) OK, records in: 7194, records dropped: 516 output_compression: NoCompression
Oct 10 10:28:27 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:28:27.609731) EVENT_LOG_v1 {"time_micros": 1760092107609717, "job": 44, "event": "compaction_finished", "compaction_time_micros": 67787, "compaction_time_cpu_micros": 34512, "output_level": 6, "num_output_files": 1, "total_output_size": 12616581, "num_input_records": 7194, "num_output_records": 6678, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 10:28:27 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:28:27 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760092107611061, "job": 44, "event": "table_file_deletion", "file_number": 74}
Oct 10 10:28:27 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000072.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:28:27 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760092107615637, "job": 44, "event": "table_file_deletion", "file_number": 72}
Oct 10 10:28:27 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:28:27.539814) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:28:27 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:28:27.615732) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:28:27 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:28:27.615737) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:28:27 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:28:27.615740) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:28:27 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:28:27.615743) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:28:27 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:28:27.615745) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:28:27 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:27 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:27 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:27.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:28:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:28:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:28:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:28:28 compute-1 ceph-mon[79167]: pgmap v1277: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:28:28 compute-1 nova_compute[235132]: 2025-10-10 10:28:28.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:28:29 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:29 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:29 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:29.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:29 compute-1 ceph-mon[79167]: pgmap v1278: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:28:29 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:29 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:28:29 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:29.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:28:31 compute-1 nova_compute[235132]: 2025-10-10 10:28:31.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:28:31 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:31 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:31 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:31.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:31 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:31 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:28:31 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:31.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:28:32 compute-1 ceph-mon[79167]: pgmap v1279: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:28:32 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:28:32 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:28:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:28:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:28:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:28:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:28:33 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:33 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:28:33 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:33.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:28:33 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:33 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:28:33 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:33.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:28:33 compute-1 nova_compute[235132]: 2025-10-10 10:28:33.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:28:34 compute-1 ceph-mon[79167]: pgmap v1280: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:28:35 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:35 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:28:35 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:35.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:28:35 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:35 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:35 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:35.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:36 compute-1 nova_compute[235132]: 2025-10-10 10:28:36.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:28:36 compute-1 ceph-mon[79167]: pgmap v1281: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:28:36 compute-1 podman[258297]: 2025-10-10 10:28:36.9781788 +0000 UTC m=+0.074876328 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct 10 10:28:37 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:37 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:28:37 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:37.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:28:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:28:37 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:37 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:37 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:37.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:28:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:28:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:28:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:28:38 compute-1 ceph-mon[79167]: pgmap v1282: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:28:38 compute-1 nova_compute[235132]: 2025-10-10 10:28:38.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:28:39 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:39 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:28:39 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:39.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:28:39 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:39 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:39 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:39.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:40 compute-1 ceph-mon[79167]: pgmap v1283: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:28:41 compute-1 nova_compute[235132]: 2025-10-10 10:28:41.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:28:41 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:41 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:41 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:41.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:41 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:41 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:28:41 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:41.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:28:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:28:42.228 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:28:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:28:42.229 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:28:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:28:42.229 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:28:42 compute-1 ceph-mon[79167]: pgmap v1284: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:28:42 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:28:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:28:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:43 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:28:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:43 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:28:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:43 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:28:43 compute-1 sudo[258320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:28:43 compute-1 sudo[258320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:28:43 compute-1 sudo[258320]: pam_unix(sudo:session): session closed for user root
Oct 10 10:28:43 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:43 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.002000054s ======
Oct 10 10:28:43 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:43.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Oct 10 10:28:43 compute-1 nova_compute[235132]: 2025-10-10 10:28:43.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:28:43 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:43 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:43 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:43.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:44 compute-1 ceph-mon[79167]: pgmap v1285: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:28:45 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:45 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:28:45 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:45.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:28:45 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:45 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:45 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:45.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:46 compute-1 nova_compute[235132]: 2025-10-10 10:28:46.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:28:46 compute-1 ceph-mon[79167]: pgmap v1286: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:28:46 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:28:47 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:47 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:47 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:47.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:28:47 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:47 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:47 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:47.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:47 compute-1 podman[258348]: 2025-10-10 10:28:47.956170936 +0000 UTC m=+0.063653641 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 10:28:47 compute-1 podman[258349]: 2025-10-10 10:28:47.972597675 +0000 UTC m=+0.070300283 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 10 10:28:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:28:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:28:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:28:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:28:48 compute-1 podman[258350]: 2025-10-10 10:28:48.031265049 +0000 UTC m=+0.118126580 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 10 10:28:48 compute-1 ceph-mon[79167]: pgmap v1287: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:28:48 compute-1 nova_compute[235132]: 2025-10-10 10:28:48.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:28:49 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:49 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:49 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:49.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:49 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:49 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:28:49 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:49.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:28:50 compute-1 ceph-mon[79167]: pgmap v1288: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:28:51 compute-1 nova_compute[235132]: 2025-10-10 10:28:51.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:28:51 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:51 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:28:51 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:51.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:28:51 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:51 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:51 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:51.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:52 compute-1 ceph-mon[79167]: pgmap v1289: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:28:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:28:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:28:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:28:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:28:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:28:53 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:53 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:28:53 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:53.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:28:53 compute-1 nova_compute[235132]: 2025-10-10 10:28:53.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:28:53 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:53 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000026s ======
Oct 10 10:28:53 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:53.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Oct 10 10:28:54 compute-1 ceph-mon[79167]: pgmap v1290: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:28:55 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:55 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:55 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:55.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:55 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:55 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:55 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:55.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:56 compute-1 nova_compute[235132]: 2025-10-10 10:28:56.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:28:56 compute-1 ceph-mon[79167]: pgmap v1291: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:28:57 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:57 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:28:57 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:57.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:28:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:28:57 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:57 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:57 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:57.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:28:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:28:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:28:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:28:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:28:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:28:58 compute-1 ceph-mon[79167]: pgmap v1292: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:28:58 compute-1 nova_compute[235132]: 2025-10-10 10:28:58.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:28:59 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:59 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:28:59 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:28:59.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:28:59 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:28:59 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:28:59 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:28:59.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:00 compute-1 ceph-mon[79167]: pgmap v1293: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:29:01 compute-1 nova_compute[235132]: 2025-10-10 10:29:01.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:29:01 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:01 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:29:01 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:01.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:29:01 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:29:01 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:01 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:29:01 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:01.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:29:02 compute-1 sudo[258422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:29:02 compute-1 sudo[258422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:29:02 compute-1 sudo[258422]: pam_unix(sudo:session): session closed for user root
Oct 10 10:29:02 compute-1 sudo[258447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:29:02 compute-1 sudo[258447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:29:02 compute-1 ceph-mon[79167]: pgmap v1294: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:29:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:29:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:29:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:29:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:29:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:29:03 compute-1 sudo[258447]: pam_unix(sudo:session): session closed for user root
Oct 10 10:29:03 compute-1 sudo[258503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:29:03 compute-1 sudo[258503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:29:03 compute-1 sudo[258503]: pam_unix(sudo:session): session closed for user root
Oct 10 10:29:03 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:03 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:29:03 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:03.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:29:03 compute-1 nova_compute[235132]: 2025-10-10 10:29:03.483 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:29:03 compute-1 nova_compute[235132]: 2025-10-10 10:29:03.483 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:29:03 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:29:03 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:29:03 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:29:03 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:29:03 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:29:03 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:29:03 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:29:03 compute-1 nova_compute[235132]: 2025-10-10 10:29:03.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:29:03 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:03 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:03 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:03.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:04 compute-1 nova_compute[235132]: 2025-10-10 10:29:04.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:29:04 compute-1 nova_compute[235132]: 2025-10-10 10:29:04.045 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:29:04 compute-1 nova_compute[235132]: 2025-10-10 10:29:04.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 10:29:04 compute-1 ceph-mon[79167]: pgmap v1295: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:29:05 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:05 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:05 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:05.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:05 compute-1 ceph-mon[79167]: pgmap v1296: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:29:05 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:05 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:05 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:05.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:06 compute-1 nova_compute[235132]: 2025-10-10 10:29:06.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:29:07 compute-1 nova_compute[235132]: 2025-10-10 10:29:07.040 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:29:07 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:07 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:07 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:07.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:29:07 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:07 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:29:07 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:07.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:29:07 compute-1 podman[258531]: 2025-10-10 10:29:07.965765441 +0000 UTC m=+0.070025126 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 10 10:29:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:29:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:29:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:29:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:29:08 compute-1 sudo[258550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:29:08 compute-1 sudo[258550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:29:08 compute-1 sudo[258550]: pam_unix(sudo:session): session closed for user root
Oct 10 10:29:08 compute-1 ceph-mon[79167]: pgmap v1297: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:29:08 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:29:08 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:29:08 compute-1 nova_compute[235132]: 2025-10-10 10:29:08.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:29:09 compute-1 nova_compute[235132]: 2025-10-10 10:29:09.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:29:09 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2349870666' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:29:09 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2057179231' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:29:09 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:09 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:09 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:09.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:09 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:09 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:09 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:09.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:10 compute-1 ceph-mon[79167]: pgmap v1298: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:29:11 compute-1 nova_compute[235132]: 2025-10-10 10:29:11.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:29:11 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1966094890' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:29:11 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:11 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:29:11 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:11.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:29:11 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:11 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:11 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:11.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:12 compute-1 nova_compute[235132]: 2025-10-10 10:29:12.045 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:29:12 compute-1 nova_compute[235132]: 2025-10-10 10:29:12.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 10:29:12 compute-1 nova_compute[235132]: 2025-10-10 10:29:12.046 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 10:29:12 compute-1 nova_compute[235132]: 2025-10-10 10:29:12.066 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 10 10:29:12 compute-1 ceph-mon[79167]: pgmap v1299: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 10 10:29:12 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2534647443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:29:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:29:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:29:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:29:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:29:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:13 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:29:13 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:13 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:29:13 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:13.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:29:13 compute-1 nova_compute[235132]: 2025-10-10 10:29:13.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:29:13 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:13 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:13 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:13.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:14 compute-1 ceph-mon[79167]: pgmap v1300: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:29:15 compute-1 nova_compute[235132]: 2025-10-10 10:29:15.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:29:15 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:15 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:15 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:15.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:15 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:15 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:15 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:15.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:16 compute-1 nova_compute[235132]: 2025-10-10 10:29:16.043 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:29:16 compute-1 nova_compute[235132]: 2025-10-10 10:29:16.070 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:29:16 compute-1 nova_compute[235132]: 2025-10-10 10:29:16.070 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:29:16 compute-1 nova_compute[235132]: 2025-10-10 10:29:16.071 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:29:16 compute-1 nova_compute[235132]: 2025-10-10 10:29:16.071 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:29:16 compute-1 nova_compute[235132]: 2025-10-10 10:29:16.071 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:29:16 compute-1 nova_compute[235132]: 2025-10-10 10:29:16.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:29:16 compute-1 ceph-mon[79167]: pgmap v1301: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:29:16 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:29:16 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1325988144' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:29:16 compute-1 nova_compute[235132]: 2025-10-10 10:29:16.522 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:29:16 compute-1 nova_compute[235132]: 2025-10-10 10:29:16.766 2 WARNING nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:29:16 compute-1 nova_compute[235132]: 2025-10-10 10:29:16.767 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4834MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:29:16 compute-1 nova_compute[235132]: 2025-10-10 10:29:16.767 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:29:16 compute-1 nova_compute[235132]: 2025-10-10 10:29:16.768 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:29:16 compute-1 nova_compute[235132]: 2025-10-10 10:29:16.857 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:29:16 compute-1 nova_compute[235132]: 2025-10-10 10:29:16.858 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:29:16 compute-1 nova_compute[235132]: 2025-10-10 10:29:16.879 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:29:17 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:29:17 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/1325988144' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:29:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:29:17 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2046352366' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:29:17 compute-1 nova_compute[235132]: 2025-10-10 10:29:17.397 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:29:17 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:17 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:17 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:17.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:17 compute-1 nova_compute[235132]: 2025-10-10 10:29:17.405 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:29:17 compute-1 nova_compute[235132]: 2025-10-10 10:29:17.424 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:29:17 compute-1 nova_compute[235132]: 2025-10-10 10:29:17.426 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:29:17 compute-1 nova_compute[235132]: 2025-10-10 10:29:17.427 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:29:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:29:17 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:17 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:17 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:17.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:29:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:29:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:29:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:29:18 compute-1 ceph-mon[79167]: pgmap v1302: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:29:18 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/2046352366' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:29:18 compute-1 nova_compute[235132]: 2025-10-10 10:29:18.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:29:19 compute-1 podman[258625]: 2025-10-10 10:29:19.026781886 +0000 UTC m=+0.128359239 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=iscsid, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 10:29:19 compute-1 podman[258626]: 2025-10-10 10:29:19.045784995 +0000 UTC m=+0.142356092 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct 10 10:29:19 compute-1 podman[258627]: 2025-10-10 10:29:19.056896379 +0000 UTC m=+0.148003247 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct 10 10:29:19 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:19 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:19 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:19.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:19 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:19 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:29:19 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:19.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:29:20 compute-1 ceph-mon[79167]: pgmap v1303: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:29:21 compute-1 nova_compute[235132]: 2025-10-10 10:29:21.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:29:21 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:21 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:21 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:21.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:21 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:21 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:21 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:21.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:22 compute-1 ceph-mon[79167]: pgmap v1304: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:29:22 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:29:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:29:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:29:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:29:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:29:23 compute-1 sudo[258688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:29:23 compute-1 sudo[258688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:29:23 compute-1 sudo[258688]: pam_unix(sudo:session): session closed for user root
Oct 10 10:29:23 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:23 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:23 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:23.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:23 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:23 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:23 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:23.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:23 compute-1 nova_compute[235132]: 2025-10-10 10:29:23.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:29:24 compute-1 ceph-mon[79167]: pgmap v1305: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:29:25 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:25 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:29:25 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:25.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:29:25 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:25 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:25 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:25.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:26 compute-1 nova_compute[235132]: 2025-10-10 10:29:26.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:29:26 compute-1 ceph-mon[79167]: pgmap v1306: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:29:27 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/1412912917' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:29:27 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/1412912917' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:29:27 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:27 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:27 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:27.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:27 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:29:27 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:27 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:27 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:27.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:29:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:29:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:29:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:29:28 compute-1 ceph-mon[79167]: pgmap v1307: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:29:28 compute-1 nova_compute[235132]: 2025-10-10 10:29:28.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:29:29 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:29 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:29 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:29.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:29 compute-1 unix_chkpwd[258719]: password check failed for user (root)
Oct 10 10:29:29 compute-1 sshd-session[258716]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.119  user=root
Oct 10 10:29:29 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:29 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:29 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:29.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:30 compute-1 ceph-mon[79167]: pgmap v1308: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:29:31 compute-1 nova_compute[235132]: 2025-10-10 10:29:31.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:29:31 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:31 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:31 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:31.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:31 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:29:31 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:31 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:31 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:31.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:32 compute-1 sshd-session[258716]: Failed password for root from 80.94.93.119 port 11752 ssh2
Oct 10 10:29:32 compute-1 ceph-mon[79167]: pgmap v1309: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:29:32 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:29:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:29:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:29:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:29:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:29:33 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:33 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000028s ======
Oct 10 10:29:33 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:33.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Oct 10 10:29:33 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:33 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:33 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:33.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:33 compute-1 nova_compute[235132]: 2025-10-10 10:29:33.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:29:34 compute-1 unix_chkpwd[258722]: password check failed for user (root)
Oct 10 10:29:34 compute-1 ceph-mon[79167]: pgmap v1310: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:29:35 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:35 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:35 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:35.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:35 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:35 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:29:35 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:35.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:29:36 compute-1 sshd-session[258716]: Failed password for root from 80.94.93.119 port 11752 ssh2
Oct 10 10:29:36 compute-1 nova_compute[235132]: 2025-10-10 10:29:36.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:29:36 compute-1 ceph-mon[79167]: pgmap v1311: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:29:36 compute-1 unix_chkpwd[258724]: password check failed for user (root)
Oct 10 10:29:37 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:37 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:37 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:37.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:29:37 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:37 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:29:37 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:37.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:29:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:29:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:29:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:29:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:29:38 compute-1 sshd-session[258716]: Failed password for root from 80.94.93.119 port 11752 ssh2
Oct 10 10:29:38 compute-1 ceph-mon[79167]: pgmap v1312: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:29:38 compute-1 sshd-session[258716]: Received disconnect from 80.94.93.119 port 11752:11:  [preauth]
Oct 10 10:29:38 compute-1 sshd-session[258716]: Disconnected from authenticating user root 80.94.93.119 port 11752 [preauth]
Oct 10 10:29:38 compute-1 sshd-session[258716]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.119  user=root
Oct 10 10:29:39 compute-1 nova_compute[235132]: 2025-10-10 10:29:39.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:29:39 compute-1 podman[258726]: 2025-10-10 10:29:39.034724988 +0000 UTC m=+0.128819433 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:29:39 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:39 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:39 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:39.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:39 compute-1 unix_chkpwd[258750]: password check failed for user (root)
Oct 10 10:29:39 compute-1 sshd-session[258747]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.119  user=root
Oct 10 10:29:39 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:39 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:39 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:39.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:40 compute-1 ceph-mon[79167]: pgmap v1313: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:29:41 compute-1 nova_compute[235132]: 2025-10-10 10:29:41.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:29:41 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:41 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:29:41 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:41.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:29:41 compute-1 ceph-mon[79167]: pgmap v1314: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:29:41 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:41 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:41 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:41.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:42 compute-1 sshd-session[258747]: Failed password for root from 80.94.93.119 port 41928 ssh2
Oct 10 10:29:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:29:42.230 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:29:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:29:42.231 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:29:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:29:42.231 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:29:42 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:29:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:29:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:29:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:29:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:43 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:29:43 compute-1 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 10:29:43 compute-1 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 10 10:29:43 compute-1 sudo[258753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:29:43 compute-1 sudo[258753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:29:43 compute-1 sudo[258753]: pam_unix(sudo:session): session closed for user root
Oct 10 10:29:43 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:43 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:43 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:43.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:43 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:43 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:43 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:43.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:44 compute-1 nova_compute[235132]: 2025-10-10 10:29:44.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:29:44 compute-1 ceph-mon[79167]: pgmap v1315: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:29:44 compute-1 unix_chkpwd[258779]: password check failed for user (root)
Oct 10 10:29:45 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:45 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:29:45 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:45.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:29:45 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:45 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:45 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:45.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:46 compute-1 ceph-mon[79167]: pgmap v1316: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:29:46 compute-1 sshd-session[258747]: Failed password for root from 80.94.93.119 port 41928 ssh2
Oct 10 10:29:46 compute-1 nova_compute[235132]: 2025-10-10 10:29:46.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:29:46 compute-1 unix_chkpwd[258781]: password check failed for user (root)
Oct 10 10:29:47 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:29:47 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:47 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:47 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:47.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:29:47 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:47 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:47 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:47.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:29:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:29:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:29:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:29:48 compute-1 ceph-mon[79167]: pgmap v1317: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:29:48 compute-1 sshd-session[258747]: Failed password for root from 80.94.93.119 port 41928 ssh2
Oct 10 10:29:48 compute-1 sshd-session[258747]: Received disconnect from 80.94.93.119 port 41928:11:  [preauth]
Oct 10 10:29:48 compute-1 sshd-session[258747]: Disconnected from authenticating user root 80.94.93.119 port 41928 [preauth]
Oct 10 10:29:48 compute-1 sshd-session[258747]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.119  user=root
Oct 10 10:29:49 compute-1 nova_compute[235132]: 2025-10-10 10:29:49.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:29:49 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:49 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:29:49 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:49.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:29:49 compute-1 unix_chkpwd[258786]: password check failed for user (root)
Oct 10 10:29:49 compute-1 sshd-session[258783]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.119  user=root
Oct 10 10:29:49 compute-1 podman[258787]: 2025-10-10 10:29:49.978555061 +0000 UTC m=+0.074331983 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 10 10:29:49 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:49 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:49 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:49.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:50 compute-1 podman[258788]: 2025-10-10 10:29:50.005654512 +0000 UTC m=+0.092530921 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 10 10:29:50 compute-1 podman[258789]: 2025-10-10 10:29:50.018456411 +0000 UTC m=+0.109199066 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 10 10:29:50 compute-1 ceph-mon[79167]: pgmap v1318: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:29:51 compute-1 sshd-session[258783]: Failed password for root from 80.94.93.119 port 37486 ssh2
Oct 10 10:29:51 compute-1 nova_compute[235132]: 2025-10-10 10:29:51.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:29:51 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:51 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:51 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:51.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:51 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:51 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:51 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:51.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:52 compute-1 unix_chkpwd[258850]: password check failed for user (root)
Oct 10 10:29:52 compute-1 ceph-mon[79167]: pgmap v1319: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:29:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:29:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:29:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:29:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:29:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:29:53 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:53 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:53 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:53.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:53 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:53 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:29:53 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:53.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:29:54 compute-1 nova_compute[235132]: 2025-10-10 10:29:54.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:29:54 compute-1 ceph-mon[79167]: pgmap v1320: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:29:54 compute-1 sshd-session[258783]: Failed password for root from 80.94.93.119 port 37486 ssh2
Oct 10 10:29:55 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:55 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:29:55 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:55.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:29:55 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:55 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:29:55 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:55.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:29:56 compute-1 ceph-mon[79167]: pgmap v1321: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:29:56 compute-1 nova_compute[235132]: 2025-10-10 10:29:56.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:29:56 compute-1 unix_chkpwd[258853]: password check failed for user (root)
Oct 10 10:29:57 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:57 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:29:57 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:57.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:29:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:29:57 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:57 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:29:57 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:57.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:29:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:29:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:29:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:29:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:29:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:29:58 compute-1 ceph-mon[79167]: pgmap v1322: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:29:58 compute-1 sshd-session[258783]: Failed password for root from 80.94.93.119 port 37486 ssh2
Oct 10 10:29:58 compute-1 sshd-session[258783]: Received disconnect from 80.94.93.119 port 37486:11:  [preauth]
Oct 10 10:29:58 compute-1 sshd-session[258783]: Disconnected from authenticating user root 80.94.93.119 port 37486 [preauth]
Oct 10 10:29:58 compute-1 sshd-session[258783]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=80.94.93.119  user=root
Oct 10 10:29:59 compute-1 nova_compute[235132]: 2025-10-10 10:29:59.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:29:59 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:59 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:29:59 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:29:59.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:29:59 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:29:59 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:29:59 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:29:59.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:30:00 compute-1 ceph-mon[79167]: pgmap v1323: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:30:00 compute-1 ceph-mon[79167]: Health detail: HEALTH_WARN 1 failed cephadm daemon(s)
Oct 10 10:30:00 compute-1 ceph-mon[79167]: [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Oct 10 10:30:00 compute-1 ceph-mon[79167]:     daemon nfs.cephfs.2.0.compute-0.ruydzo on compute-0 is in error state
Oct 10 10:30:01 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:30:01 compute-1 nova_compute[235132]: 2025-10-10 10:30:01.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:30:01 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:01 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:01 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:01.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:01 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:01 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:02 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:01.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:02 compute-1 ceph-mon[79167]: pgmap v1324: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:30:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:30:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:30:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:30:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:30:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:30:03 compute-1 nova_compute[235132]: 2025-10-10 10:30:03.427 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:30:03 compute-1 nova_compute[235132]: 2025-10-10 10:30:03.428 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:30:03 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:03 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:30:03 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:03.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:30:03 compute-1 sudo[258857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:30:03 compute-1 sudo[258857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:30:03 compute-1 sudo[258857]: pam_unix(sudo:session): session closed for user root
Oct 10 10:30:04 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:04 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:04 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:04.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:04 compute-1 nova_compute[235132]: 2025-10-10 10:30:04.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:30:04 compute-1 ceph-mon[79167]: pgmap v1325: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:30:05 compute-1 nova_compute[235132]: 2025-10-10 10:30:05.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:30:05 compute-1 nova_compute[235132]: 2025-10-10 10:30:05.045 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:30:05 compute-1 nova_compute[235132]: 2025-10-10 10:30:05.045 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 10 10:30:05 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:05 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:30:05 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:05.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:30:06 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:06 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:06 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:06.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:06 compute-1 nova_compute[235132]: 2025-10-10 10:30:06.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:30:06 compute-1 ceph-mon[79167]: pgmap v1326: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:30:07 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:07 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:07 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:07.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:07 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:30:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:07 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:30:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:30:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:30:08 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:08 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:30:08 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:08 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:08 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:08.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:08 compute-1 sudo[258885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:30:08 compute-1 sudo[258885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:30:08 compute-1 sudo[258885]: pam_unix(sudo:session): session closed for user root
Oct 10 10:30:08 compute-1 sudo[258910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 10 10:30:08 compute-1 sudo[258910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:30:08 compute-1 ceph-mon[79167]: pgmap v1327: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:30:08 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1914201702' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:30:08 compute-1 podman[259009]: 2025-10-10 10:30:08.997570766 +0000 UTC m=+0.096749706 container exec 8a2c16c69263d410aefee28722d7403147210c67386f896d09115878d61e6ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 10 10:30:09 compute-1 nova_compute[235132]: 2025-10-10 10:30:09.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:30:09 compute-1 nova_compute[235132]: 2025-10-10 10:30:09.040 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:30:09 compute-1 podman[259009]: 2025-10-10 10:30:09.102039992 +0000 UTC m=+0.201218872 container exec_died 8a2c16c69263d410aefee28722d7403147210c67386f896d09115878d61e6ab8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-crash-compute-1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 10 10:30:09 compute-1 podman[259045]: 2025-10-10 10:30:09.256626778 +0000 UTC m=+0.077842779 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 10:30:09 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:09 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:30:09 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:09.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:30:09 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 10 10:30:09 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/919033109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:30:09 compute-1 podman[259149]: 2025-10-10 10:30:09.690822447 +0000 UTC m=+0.062041216 container exec db44b00524aaf85460d5de4878ad14e99cea939212a287f5217a7de1b31f7321 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 10:30:09 compute-1 podman[259149]: 2025-10-10 10:30:09.701926621 +0000 UTC m=+0.073145430 container exec_died db44b00524aaf85460d5de4878ad14e99cea939212a287f5217a7de1b31f7321 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-node-exporter-compute-1, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 10 10:30:10 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:10 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:10 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:10.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:10 compute-1 nova_compute[235132]: 2025-10-10 10:30:10.040 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:30:10 compute-1 nova_compute[235132]: 2025-10-10 10:30:10.061 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:30:10 compute-1 podman[259240]: 2025-10-10 10:30:10.158213864 +0000 UTC m=+0.082612350 container exec d3cf84749d9f2f04e4804a4d648101430763ca38f98380504c4e60979dc43596 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 10 10:30:10 compute-1 podman[259240]: 2025-10-10 10:30:10.167866618 +0000 UTC m=+0.092265094 container exec_died d3cf84749d9f2f04e4804a4d648101430763ca38f98380504c4e60979dc43596 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct 10 10:30:10 compute-1 podman[259308]: 2025-10-10 10:30:10.464566788 +0000 UTC m=+0.073900660 container exec 7ce1be4884f313700b9f3fe6c66e14b163c4d6bf31089a45b751a6389f8be562 (image=quay.io/ceph/haproxy:2.3, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw)
Oct 10 10:30:10 compute-1 podman[259308]: 2025-10-10 10:30:10.475921099 +0000 UTC m=+0.085254961 container exec_died 7ce1be4884f313700b9f3fe6c66e14b163c4d6bf31089a45b751a6389f8be562 (image=quay.io/ceph/haproxy:2.3, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-haproxy-nfs-cephfs-compute-1-ehhoyw)
Oct 10 10:30:10 compute-1 ceph-mon[79167]: pgmap v1328: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:30:10 compute-1 podman[259373]: 2025-10-10 10:30:10.728196445 +0000 UTC m=+0.064993328 container exec 8efe4c511a9ca69c4fbe77af128c823fd37bedd1e530d3bfb1fc1044e25a5138 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp, com.redhat.component=keepalived-container, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, build-date=2023-02-22T09:23:20, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, distribution-scope=public)
Oct 10 10:30:10 compute-1 podman[259373]: 2025-10-10 10:30:10.744294595 +0000 UTC m=+0.081091468 container exec_died 8efe4c511a9ca69c4fbe77af128c823fd37bedd1e530d3bfb1fc1044e25a5138 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-keepalived-nfs-cephfs-compute-1-twbftp, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, version=2.2.4, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=keepalived-container, release=1793, vendor=Red Hat, Inc., architecture=x86_64)
Oct 10 10:30:10 compute-1 sudo[258910]: pam_unix(sudo:session): session closed for user root
Oct 10 10:30:10 compute-1 sudo[259407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 10 10:30:10 compute-1 sudo[259407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:30:10 compute-1 sudo[259407]: pam_unix(sudo:session): session closed for user root
Oct 10 10:30:11 compute-1 sudo[259432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/21f084a3-af34-5230-afe4-ea5cd24a55f4/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 10 10:30:11 compute-1 sudo[259432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:30:11 compute-1 nova_compute[235132]: 2025-10-10 10:30:11.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:30:11 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:11 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:11 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:11.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:11 compute-1 sudo[259432]: pam_unix(sudo:session): session closed for user root
Oct 10 10:30:11 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:30:11 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:30:11 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1921870048' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:30:11 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:30:11 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:30:11 compute-1 ceph-mon[79167]: pgmap v1329: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:30:11 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 10 10:30:11 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3445838709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:30:12 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:12 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:12 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:12.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:12 compute-1 nova_compute[235132]: 2025-10-10 10:30:12.045 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:30:12 compute-1 nova_compute[235132]: 2025-10-10 10:30:12.046 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 10 10:30:12 compute-1 nova_compute[235132]: 2025-10-10 10:30:12.046 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 10 10:30:12 compute-1 nova_compute[235132]: 2025-10-10 10:30:12.067 2 DEBUG nova.compute.manager [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 10 10:30:12 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:30:12 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 10 10:30:12 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:30:12 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 10 10:30:12 compute-1 ceph-mon[79167]: pgmap v1330: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 10 10:30:12 compute-1 ceph-mon[79167]: pgmap v1331: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 721 B/s rd, 0 op/s
Oct 10 10:30:12 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:30:12 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:30:12 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 10 10:30:12 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 10 10:30:12 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:30:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:30:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:30:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:12 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:30:13 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:13 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:30:13 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:13 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:30:13 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:13.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:30:14 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:14 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:30:14 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:14.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:30:14 compute-1 nova_compute[235132]: 2025-10-10 10:30:14.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:30:14 compute-1 ceph-mon[79167]: pgmap v1332: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Oct 10 10:30:15 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:15 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:15 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:15.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:16 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:16 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:30:16 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:16.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:30:16 compute-1 nova_compute[235132]: 2025-10-10 10:30:16.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:30:16 compute-1 nova_compute[235132]: 2025-10-10 10:30:16.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:30:16 compute-1 sudo[259491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 10 10:30:16 compute-1 sudo[259491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:30:16 compute-1 sudo[259491]: pam_unix(sudo:session): session closed for user root
Oct 10 10:30:16 compute-1 ceph-mon[79167]: pgmap v1333: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Oct 10 10:30:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:30:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:30:16 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' 
Oct 10 10:30:17 compute-1 nova_compute[235132]: 2025-10-10 10:30:17.044 2 DEBUG oslo_service.periodic_task [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 10 10:30:17 compute-1 nova_compute[235132]: 2025-10-10 10:30:17.083 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:30:17 compute-1 nova_compute[235132]: 2025-10-10 10:30:17.084 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:30:17 compute-1 nova_compute[235132]: 2025-10-10 10:30:17.085 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:30:17 compute-1 nova_compute[235132]: 2025-10-10 10:30:17.085 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Auditing locally available compute resources for compute-1.ctlplane.example.com (node: compute-1.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 10 10:30:17 compute-1 nova_compute[235132]: 2025-10-10 10:30:17.085 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:30:17 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:17 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:30:17 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:17.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:30:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:30:17 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/862817915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:30:17 compute-1 nova_compute[235132]: 2025-10-10 10:30:17.615 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:30:17 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:30:17 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/862817915' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:30:17 compute-1 nova_compute[235132]: 2025-10-10 10:30:17.806 2 WARNING nova.virt.libvirt.driver [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 10 10:30:17 compute-1 nova_compute[235132]: 2025-10-10 10:30:17.807 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Hypervisor/Node resource view: name=compute-1.ctlplane.example.com free_ram=4846MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 10 10:30:17 compute-1 nova_compute[235132]: 2025-10-10 10:30:17.807 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:30:17 compute-1 nova_compute[235132]: 2025-10-10 10:30:17.807 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:30:17 compute-1 nova_compute[235132]: 2025-10-10 10:30:17.877 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 10 10:30:17 compute-1 nova_compute[235132]: 2025-10-10 10:30:17.877 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Final resource view: name=compute-1.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 10 10:30:17 compute-1 nova_compute[235132]: 2025-10-10 10:30:17.893 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 10 10:30:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:30:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:30:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:17 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:30:18 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:18 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:30:18 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:18 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:18 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:18.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:18 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 10 10:30:18 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/832635326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:30:18 compute-1 nova_compute[235132]: 2025-10-10 10:30:18.398 2 DEBUG oslo_concurrency.processutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 10 10:30:18 compute-1 nova_compute[235132]: 2025-10-10 10:30:18.407 2 DEBUG nova.compute.provider_tree [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed in ProviderTree for provider: c9b2c4a3-cb19-4387-8719-36027e3cdaec update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 10 10:30:18 compute-1 nova_compute[235132]: 2025-10-10 10:30:18.433 2 DEBUG nova.scheduler.client.report [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Inventory has not changed for provider c9b2c4a3-cb19-4387-8719-36027e3cdaec based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 10 10:30:18 compute-1 nova_compute[235132]: 2025-10-10 10:30:18.436 2 DEBUG nova.compute.resource_tracker [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Compute_service record updated for compute-1.ctlplane.example.com:compute-1.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 10 10:30:18 compute-1 nova_compute[235132]: 2025-10-10 10:30:18.436 2 DEBUG oslo_concurrency.lockutils [None req-e32f27d5-bb7e-46de-9a38-d5dc1597316d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:30:18 compute-1 ceph-mon[79167]: pgmap v1334: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Oct 10 10:30:18 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/832635326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 10 10:30:19 compute-1 nova_compute[235132]: 2025-10-10 10:30:19.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:30:19 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:19 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:19 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:19.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:20 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:20 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:20 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:20.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:20 compute-1 ceph-mon[79167]: pgmap v1335: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Oct 10 10:30:21 compute-1 podman[259563]: 2025-10-10 10:30:21.004258253 +0000 UTC m=+0.095067740 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 10:30:21 compute-1 podman[259562]: 2025-10-10 10:30:21.055100086 +0000 UTC m=+0.147909317 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 10 10:30:21 compute-1 podman[259564]: 2025-10-10 10:30:21.07541703 +0000 UTC m=+0.162549815 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible)
Oct 10 10:30:21 compute-1 nova_compute[235132]: 2025-10-10 10:30:21.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:30:21 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:21 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:21 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:21.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:22 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:22 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:22 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:22.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:22 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:30:22 compute-1 ceph-mon[79167]: pgmap v1336: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 10 10:30:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:30:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:30:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:22 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:30:23 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:23 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:30:23 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:23 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:23 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:23.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:23 compute-1 sudo[259628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:30:23 compute-1 sudo[259628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:30:23 compute-1 sudo[259628]: pam_unix(sudo:session): session closed for user root
Oct 10 10:30:24 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:24 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:24 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:24.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:24 compute-1 nova_compute[235132]: 2025-10-10 10:30:24.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:30:24 compute-1 ceph-mon[79167]: pgmap v1337: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:30:25 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:25 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:30:25 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:25.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:30:26 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:26 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:26 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:26.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:26 compute-1 nova_compute[235132]: 2025-10-10 10:30:26.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:30:26 compute-1 ceph-mon[79167]: pgmap v1338: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:30:26 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/1496174560' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 10 10:30:26 compute-1 ceph-mon[79167]: from='client.? 192.168.122.10:0/1496174560' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 10 10:30:27 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:27 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:27 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:27.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:27 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:30:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:30:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:30:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:27 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:30:28 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:28 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:30:28 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:28 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:30:28 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:28.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:30:28 compute-1 ceph-mon[79167]: pgmap v1339: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:30:29 compute-1 nova_compute[235132]: 2025-10-10 10:30:29.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:30:29 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:29 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:29 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:29.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:30 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:30 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:30 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:30.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:30 compute-1 ceph-mon[79167]: pgmap v1340: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:30:31 compute-1 nova_compute[235132]: 2025-10-10 10:30:31.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:30:31 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:31 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:30:31 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:31.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:30:31 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:30:32 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:32 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:30:32 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:32.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:30:32 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:30:32 compute-1 ceph-mon[79167]: pgmap v1341: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:30:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:30:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:30:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:32 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:30:33 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:33 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:30:33 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:33 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:33 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:33.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:33 compute-1 sshd-session[259658]: Accepted publickey for zuul from 192.168.122.10 port 50238 ssh2: ECDSA SHA256:OTD5B+ahDqExNS+mhJP5lz4CJKQqbHlXujfiLvlujac
Oct 10 10:30:33 compute-1 systemd-logind[789]: New session 60 of user zuul.
Oct 10 10:30:34 compute-1 systemd[1]: Started Session 60 of User zuul.
Oct 10 10:30:34 compute-1 sshd-session[259658]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 10 10:30:34 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:34 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:34 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:34.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:34 compute-1 nova_compute[235132]: 2025-10-10 10:30:34.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:30:34 compute-1 sudo[259662]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp -p container,openstack_edpm,system,storage,virt'
Oct 10 10:30:34 compute-1 sudo[259662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 10 10:30:34 compute-1 ceph-mon[79167]: pgmap v1342: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:30:35 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:35 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:35 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:35.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:36 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:36 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:36 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:36.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:36 compute-1 nova_compute[235132]: 2025-10-10 10:30:36.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:30:36 compute-1 ceph-mon[79167]: pgmap v1343: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:30:37 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:37 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:30:37 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:37.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:30:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:30:37 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "status"} v 0)
Oct 10 10:30:37 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3130338205' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 10 10:30:37 compute-1 ceph-mon[79167]: from='client.27688 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:37 compute-1 ceph-mon[79167]: from='client.26969 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:37 compute-1 ceph-mon[79167]: from='client.18153 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:37 compute-1 ceph-mon[79167]: from='client.27703 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:37 compute-1 ceph-mon[79167]: from='client.18159 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:37 compute-1 ceph-mon[79167]: from='client.26975 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:37 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1165768626' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 10 10:30:37 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/43274649' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 10 10:30:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:37 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:30:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:30:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:30:38 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:38 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:30:38 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:38 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:30:38 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:38.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:30:38 compute-1 ceph-mon[79167]: pgmap v1344: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:30:38 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3130338205' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 10 10:30:39 compute-1 nova_compute[235132]: 2025-10-10 10:30:39.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:30:39 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:39 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:39 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:39.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:39 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #76. Immutable memtables: 0.
Oct 10 10:30:39 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:30:39.669378) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 10 10:30:39 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 76
Oct 10 10:30:39 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760092239669429, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 1587, "num_deletes": 258, "total_data_size": 3888679, "memory_usage": 3937488, "flush_reason": "Manual Compaction"}
Oct 10 10:30:39 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #77: started
Oct 10 10:30:39 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760092239681722, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 77, "file_size": 2541098, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39342, "largest_seqno": 40924, "table_properties": {"data_size": 2534363, "index_size": 3806, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14470, "raw_average_key_size": 20, "raw_value_size": 2520675, "raw_average_value_size": 3486, "num_data_blocks": 164, "num_entries": 723, "num_filter_entries": 723, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760092108, "oldest_key_time": 1760092108, "file_creation_time": 1760092239, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:30:39 compute-1 ceph-mon[79167]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 12531 microseconds, and 6913 cpu microseconds.
Oct 10 10:30:39 compute-1 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:30:39 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:30:39.681893) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #77: 2541098 bytes OK
Oct 10 10:30:39 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:30:39.681961) [db/memtable_list.cc:519] [default] Level-0 commit table #77 started
Oct 10 10:30:39 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:30:39.683481) [db/memtable_list.cc:722] [default] Level-0 commit table #77: memtable #1 done
Oct 10 10:30:39 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:30:39.683497) EVENT_LOG_v1 {"time_micros": 1760092239683492, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 10 10:30:39 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:30:39.683518) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 10 10:30:39 compute-1 ceph-mon[79167]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 3881260, prev total WAL file size 3881260, number of live WAL files 2.
Oct 10 10:30:39 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000073.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:30:39 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:30:39.684605) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303036' seq:72057594037927935, type:22 .. '6C6F676D0031323630' seq:0, type:0; will stop at (end)
Oct 10 10:30:39 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 10 10:30:39 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [77(2481KB)], [75(12MB)]
Oct 10 10:30:39 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760092239684638, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [77], "files_L6": [75], "score": -1, "input_data_size": 15157679, "oldest_snapshot_seqno": -1}
Oct 10 10:30:39 compute-1 ceph-mon[79167]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #78: 6867 keys, 14996332 bytes, temperature: kUnknown
Oct 10 10:30:39 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760092239757252, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 78, "file_size": 14996332, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14951013, "index_size": 27031, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17221, "raw_key_size": 180389, "raw_average_key_size": 26, "raw_value_size": 14827761, "raw_average_value_size": 2159, "num_data_blocks": 1069, "num_entries": 6867, "num_filter_entries": 6867, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760089522, "oldest_key_time": 0, "file_creation_time": 1760092239, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "72205880-e92d-427e-a84d-d60d79c79ead", "db_session_id": "7GCI9JJE38KWUSAORMRB", "orig_file_number": 78, "seqno_to_time_mapping": "N/A"}}
Oct 10 10:30:39 compute-1 ceph-mon[79167]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 10 10:30:39 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:30:39.757672) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 14996332 bytes
Oct 10 10:30:39 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:30:39.758865) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 208.3 rd, 206.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 12.0 +0.0 blob) out(14.3 +0.0 blob), read-write-amplify(11.9) write-amplify(5.9) OK, records in: 7401, records dropped: 534 output_compression: NoCompression
Oct 10 10:30:39 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:30:39.758886) EVENT_LOG_v1 {"time_micros": 1760092239758877, "job": 46, "event": "compaction_finished", "compaction_time_micros": 72778, "compaction_time_cpu_micros": 30211, "output_level": 6, "num_output_files": 1, "total_output_size": 14996332, "num_input_records": 7401, "num_output_records": 6867, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 10 10:30:39 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:30:39 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760092239759496, "job": 46, "event": "table_file_deletion", "file_number": 77}
Oct 10 10:30:39 compute-1 ceph-mon[79167]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-1/store.db/000075.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 10 10:30:39 compute-1 ceph-mon[79167]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760092239761913, "job": 46, "event": "table_file_deletion", "file_number": 75}
Oct 10 10:30:39 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:30:39.684548) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:30:39 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:30:39.761967) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:30:39 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:30:39.761973) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:30:39 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:30:39.761974) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:30:39 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:30:39.761976) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:30:39 compute-1 ceph-mon[79167]: rocksdb: (Original Log Time 2025/10/10-10:30:39.761977) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 10 10:30:39 compute-1 podman[259958]: 2025-10-10 10:30:39.98015774 +0000 UTC m=+0.079341089 container health_status c6e6727876947575b20e0af07da07677490dbf4e8b418ce6e00c59bbf363ed51 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent)
Oct 10 10:30:40 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:40 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:30:40 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:40.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:30:40 compute-1 ceph-mon[79167]: pgmap v1345: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:30:41 compute-1 ovs-vsctl[260004]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct 10 10:30:41 compute-1 nova_compute[235132]: 2025-10-10 10:30:41.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:30:41 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:41 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:41 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:41.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:42 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:42 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:42 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:42.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:30:42.231 141156 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 10 10:30:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:30:42.232 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 10 10:30:42 compute-1 ovn_metadata_agent[141151]: 2025-10-10 10:30:42.232 141156 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 10 10:30:42 compute-1 virtqemud[234629]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct 10 10:30:42 compute-1 virtqemud[234629]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct 10 10:30:42 compute-1 virtqemud[234629]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 10 10:30:42 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:30:42 compute-1 ceph-mon[79167]: pgmap v1346: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:30:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:42 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:30:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:43 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:30:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:43 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:30:43 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:43 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:30:43 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: cache status {prefix=cache status} (starting...)
Oct 10 10:30:43 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Can't run that command on an inactive MDS!
Oct 10 10:30:43 compute-1 lvm[260297]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 10 10:30:43 compute-1 lvm[260297]: VG ceph_vg0 finished
Oct 10 10:30:43 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: client ls {prefix=client ls} (starting...)
Oct 10 10:30:43 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Can't run that command on an inactive MDS!
Oct 10 10:30:43 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:43 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:30:43 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:43.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:30:43 compute-1 sudo[260431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 10 10:30:43 compute-1 sudo[260431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 10 10:30:43 compute-1 sudo[260431]: pam_unix(sudo:session): session closed for user root
Oct 10 10:30:43 compute-1 ceph-mon[79167]: from='client.27724 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:43 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/561687218' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 10 10:30:43 compute-1 ceph-mon[79167]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 10 10:30:43 compute-1 ceph-mon[79167]: from='client.27736 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:43 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1504914682' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:30:43 compute-1 ceph-mon[79167]: from='client.18186 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:43 compute-1 ceph-mon[79167]: from='client.27751 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:43 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/4271142430' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 10 10:30:43 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/171472224' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 10 10:30:43 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: damage ls {prefix=damage ls} (starting...)
Oct 10 10:30:43 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Can't run that command on an inactive MDS!
Oct 10 10:30:43 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "report"} v 0)
Oct 10 10:30:43 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/206671409' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 10 10:30:44 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:44 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:44 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:44.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:44 compute-1 nova_compute[235132]: 2025-10-10 10:30:44.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:30:44 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: dump loads {prefix=dump loads} (starting...)
Oct 10 10:30:44 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Can't run that command on an inactive MDS!
Oct 10 10:30:44 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct 10 10:30:44 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Can't run that command on an inactive MDS!
Oct 10 10:30:44 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct 10 10:30:44 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Can't run that command on an inactive MDS!
Oct 10 10:30:44 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Oct 10 10:30:44 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2810672141' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:30:44 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct 10 10:30:44 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Can't run that command on an inactive MDS!
Oct 10 10:30:44 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct 10 10:30:44 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Can't run that command on an inactive MDS!
Oct 10 10:30:44 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config log"} v 0)
Oct 10 10:30:44 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3250802086' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 10 10:30:44 compute-1 ceph-mon[79167]: pgmap v1347: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:30:44 compute-1 ceph-mon[79167]: from='client.18204 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:44 compute-1 ceph-mon[79167]: from='client.27763 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:44 compute-1 ceph-mon[79167]: from='client.26996 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:44 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3593539214' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:30:44 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/206671409' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 10 10:30:44 compute-1 ceph-mon[79167]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 10 10:30:44 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2269363297' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 10 10:30:44 compute-1 ceph-mon[79167]: from='client.18222 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:44 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1064114185' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 10 10:30:44 compute-1 ceph-mon[79167]: from='client.27008 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:44 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/17616546' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 10 10:30:44 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/2810672141' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 10 10:30:44 compute-1 ceph-mon[79167]: from='client.27808 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:44 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1752163504' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 10 10:30:44 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3250802086' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 10 10:30:44 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2558230964' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 10 10:30:44 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct 10 10:30:44 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Can't run that command on an inactive MDS!
Oct 10 10:30:45 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct 10 10:30:45 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Can't run that command on an inactive MDS!
Oct 10 10:30:45 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Oct 10 10:30:45 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2494599266' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 10 10:30:45 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: ops {prefix=ops} (starting...)
Oct 10 10:30:45 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Can't run that command on an inactive MDS!
Oct 10 10:30:45 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:45 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:45 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:45.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:45 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Oct 10 10:30:45 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3277923492' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 10 10:30:45 compute-1 ceph-mon[79167]: from='client.18243 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:45 compute-1 ceph-mon[79167]: from='client.27029 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:45 compute-1 ceph-mon[79167]: from='client.27832 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:45 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/132749714' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 10 10:30:45 compute-1 ceph-mon[79167]: from='client.27044 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:45 compute-1 ceph-mon[79167]: from='client.27850 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:45 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/731166632' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 10 10:30:45 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/2494599266' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 10 10:30:45 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/3206344806' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 10 10:30:45 compute-1 ceph-mon[79167]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 10 10:30:45 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/565812220' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 10 10:30:45 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3277923492' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 10 10:30:45 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2827576184' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 10 10:30:45 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2830212554' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 10 10:30:46 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Oct 10 10:30:46 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3172139523' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 10 10:30:46 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:46 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:46 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:46.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:46 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: session ls {prefix=session ls} (starting...)
Oct 10 10:30:46 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt Can't run that command on an inactive MDS!
Oct 10 10:30:46 compute-1 ceph-mds[84956]: mds.cephfs.compute-1.fhagzt asok_command: status {prefix=status} (starting...)
Oct 10 10:30:46 compute-1 nova_compute[235132]: 2025-10-10 10:30:46.473 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:30:46 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Oct 10 10:30:46 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/547263865' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 10 10:30:46 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "features"} v 0)
Oct 10 10:30:46 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3113939320' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 10 10:30:46 compute-1 ceph-mon[79167]: from='client.18297 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:46 compute-1 ceph-mon[79167]: from='client.27077 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:46 compute-1 ceph-mon[79167]: pgmap v1348: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:30:46 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3111998569' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 10 10:30:46 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1189640479' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 10 10:30:46 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3172139523' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 10 10:30:46 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2107248748' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 10:30:46 compute-1 ceph-mon[79167]: from='client.27092 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:46 compute-1 ceph-mon[79167]: from='client.27901 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:46 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:30:46 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/110223178' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 10 10:30:46 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/3723288549' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 10 10:30:46 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3461548462' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 10 10:30:46 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/547263865' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 10 10:30:46 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3113939320' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 10 10:30:46 compute-1 ceph-mon[79167]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 10 10:30:46 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2575731666' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 10 10:30:46 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Oct 10 10:30:46 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2912853044' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 10 10:30:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Oct 10 10:30:47 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3051349397' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 10 10:30:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct 10 10:30:47 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3858622949' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 10:30:47 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:47 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:47 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:47.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:30:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Oct 10 10:30:47 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/683696595' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 10 10:30:47 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3006396374' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 10:30:47 compute-1 ceph-mon[79167]: from='client.18363 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:47 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1715209047' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 10 10:30:47 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/2912853044' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 10 10:30:47 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3051349397' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 10 10:30:47 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/910388127' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 10 10:30:47 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3498770159' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 10 10:30:47 compute-1 ceph-mon[79167]: from='client.27949 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:47 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2040415731' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 10 10:30:47 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3858622949' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 10:30:47 compute-1 ceph-mon[79167]: from='client.27140 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:47 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/4142265369' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 10 10:30:47 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1724959652' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 10 10:30:47 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/683696595' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 10 10:30:47 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2898906544' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 10 10:30:47 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Oct 10 10:30:47 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/149990470' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 10 10:30:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:30:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:30:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:47 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:30:48 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:48 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:30:48 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:48 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:48 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:48.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:48 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Oct 10 10:30:48 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1680992890' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 10 10:30:48 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Oct 10 10:30:48 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2469062832' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 10 10:30:48 compute-1 ceph-mon[79167]: from='client.27967 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:48 compute-1 ceph-mon[79167]: pgmap v1349: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:30:48 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1085433634' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 10 10:30:48 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/149990470' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 10 10:30:48 compute-1 ceph-mon[79167]: from='client.18420 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:48 compute-1 ceph-mon[79167]: from='client.27979 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:48 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/1680992890' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 10 10:30:48 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1585167422' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 10 10:30:48 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2303646199' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 10 10:30:48 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/2469062832' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 10 10:30:48 compute-1 ceph-mon[79167]: from='client.18441 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:48 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3231707591' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 10 10:30:48 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2013373529' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 10:30:48 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Oct 10 10:30:48 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3444631537' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 10 10:30:49 compute-1 nova_compute[235132]: 2025-10-10 10:30:49.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:30:49 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Oct 10 10:30:49 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/724274365' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 10 10:30:49 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:49 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:30:49 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:49.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:30:49 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Oct 10 10:30:49 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3730549091' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:19.280383+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:20.280524+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:21.280641+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:22.280798+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:23.280925+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:24.281094+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:25.281264+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:26.281386+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:27.281536+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:28.281703+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:29.281877+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:30.281988+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:31.282127+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:32.282301+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:33.282607+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:34.282802+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:35.283053+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:36.283204+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:37.283384+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:38.283588+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:39.283739+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:40.283883+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:41.283998+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:42.284168+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:43.284389+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:44.284936+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:45.285117+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:46.285282+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:47.285568+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:48.285695+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:49.286625+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:50.286740+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:51.286945+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:52.287416+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:53.287560+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:54.287691+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:55.287827+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:56.287947+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:57.288093+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:58.288243+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:58:59.288451+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:00.288582+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:01.288740+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:02.289043+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:03.289252+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:04.289459+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:05.289654+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:06.289775+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:07.289949+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:08.290102+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:09.290274+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:10.290499+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:11.290651+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:12.290820+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:13.290997+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:14.291156+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:15.291363+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:16.291500+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 3481600 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:17.291616+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:18.291752+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:19.291945+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:20.292082+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:21.292223+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:22.292393+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:23.292500+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:24.292616+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:25.292801+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:26.292975+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:27.293116+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:28.293235+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:29.293402+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:30.293542+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:31.293670+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:32.293837+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:33.293985+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:34.294119+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:35.294414+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:36.294560+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:37.294831+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:38.295000+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:39.295209+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:40.295491+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:41.295653+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:42.295796+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:43.295995+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:44.296136+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:45.296364+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 3473408 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b0963a4800 session 0x55b098fe1a40
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:46.296542+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 3465216 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:47.296699+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 3465216 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:48.296859+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 3465216 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:49.297291+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:50.297438+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:51.297598+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:52.297742+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:53.297881+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990185 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:54.298046+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:55.298152+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:56.298261+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b096d95c00
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 166.030792236s of 166.034805298s, submitted: 1
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b097aa5c00 session 0x55b0987bf4a0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026000 session 0x55b096d5eb40
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:57.298392+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:58.298559+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 990317 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T09:59:59.298799+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:00.298952+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:01.299096+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:02.299274+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:03.299431+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991829 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:04.299553+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:05.299715+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:06.299869+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:07.300717+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027400
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.242959023s of 11.250681877s, submitted: 2
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:08.300836+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991961 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:09.300980+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:10.301123+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:11.301368+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:12.301544+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:13.301722+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b096657800
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993473 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:14.301899+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:15.302061+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:16.302200+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:17.302348+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:18.302536+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 993341 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:19.302922+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.104929924s of 12.169629097s, submitted: 3
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:20.303062+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:21.303694+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:22.303849+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:23.304371+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992618 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:24.304500+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:25.304643+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:26.305013+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:27.305755+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:28.305864+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b096657800 session 0x55b099008780
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099027400 session 0x55b09900a1e0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992618 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:29.306208+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:30.306385+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:31.307012+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:32.307157+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:33.307310+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992618 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:34.307485+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026800 session 0x55b096c4b0e0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b096dca000 session 0x55b097a14f00
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:35.308308+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:36.308558+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:37.308703+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:38.308873+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992618 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:39.309076+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0963a4800
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.395177841s of 19.406061172s, submitted: 3
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:40.309286+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:41.309584+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:42.309806+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:43.309965+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:44.310124+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992750 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:45.310248+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b097aa5c00
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:46.310398+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 3457024 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:47.310505+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:48.310641+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:49.310837+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992882 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:50.311005+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:51.311154+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:52.311384+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.400293350s of 13.415930748s, submitted: 2
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:53.311540+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:54.311681+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992750 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:55.311821+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:56.311986+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:57.312411+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:58.312566+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b0963a4800 session 0x55b09900ab40
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:00:59.312765+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992750 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:00.312913+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:01.313086+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:02.313404+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.019653320s of 10.023086548s, submitted: 1
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:03.313557+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:04.313759+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992618 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:05.313878+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:06.314009+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 3440640 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:07.314163+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:08.314304+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:09.314543+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992618 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026000
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:10.314765+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:11.314957+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:12.315099+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:13.315299+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:14.315548+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992750 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:15.315682+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026400
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.912096024s of 12.919400215s, submitted: 2
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:16.315817+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:17.315956+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:18.316076+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:19.316294+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994262 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:20.316434+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:21.316549+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:22.316724+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:23.316851+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82436096 unmapped: 3424256 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:24.317195+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994130 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 3416064 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:25.317342+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 3416064 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:26.317583+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 3416064 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:27.318469+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:28.318653+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:29.319435+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994130 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b096dcbc00 session 0x55b09586e3c0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0963a4800
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:30.319766+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:31.319953+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:32.320223+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:33.320430+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:34.320578+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994130 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:35.320718+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:36.320876+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:37.321015+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:38.321190+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:39.321432+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994130 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:40.321669+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026400 session 0x55b09900b680
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026000 session 0x55b09900b0e0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:41.321817+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:42.322150+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:43.322371+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:44.322566+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994130 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:45.322700+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:46.322833+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 3399680 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:47.323021+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82477056 unmapped: 3383296 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:48.323143+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82477056 unmapped: 3383296 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:49.323301+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994130 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82485248 unmapped: 3375104 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:50.323462+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82485248 unmapped: 3375104 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:51.323636+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b096dca000
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.915779114s of 35.922908783s, submitted: 2
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 3366912 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:52.323852+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 3366912 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:53.324005+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 3366912 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:54.324148+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 994262 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 3366912 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:55.324533+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 3366912 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:56.324678+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 3366912 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:57.324819+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026800
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3358720 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:58.325047+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3358720 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:01:59.325272+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995774 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3358720 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:00.325434+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3358720 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:01.325578+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3358720 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:02.325749+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3358720 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:03.325886+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.083123207s of 12.089940071s, submitted: 2
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3358720 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:04.326023+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995183 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3358720 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:05.326155+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3358720 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:06.326373+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 3358720 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:07.326538+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82518016 unmapped: 3342336 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:08.326661+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82518016 unmapped: 3342336 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:09.327140+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995051 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82518016 unmapped: 3342336 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:10.327411+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 3334144 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:11.327581+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 3334144 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:12.327801+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 3334144 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:13.327941+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 3334144 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:14.328123+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995051 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 3334144 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:15.328247+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 3334144 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:16.328434+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3325952 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:17.328594+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3325952 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:18.328795+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3325952 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:19.328955+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995051 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3325952 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:20.329167+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3325952 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:21.329397+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3325952 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:22.329526+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3325952 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:23.329660+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3325952 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:24.329847+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995051 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3325952 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:25.329987+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3325952 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:26.330189+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 3325952 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:27.330373+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:28.330599+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:29.330814+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995051 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:30.330963+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:31.331179+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:32.331347+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:33.331504+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:34.331693+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b097aa5c00 session 0x55b0990090e0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995051 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:35.331847+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:36.332002+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:37.332221+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:38.332387+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:39.332702+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995051 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:40.332858+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:41.332996+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:42.333274+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:43.333432+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:44.333601+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995051 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:45.333734+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 3309568 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027400
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 41.845153809s of 41.853366852s, submitted: 2
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:46.334056+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82558976 unmapped: 3301376 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:47.334229+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82558976 unmapped: 3301376 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b098269000 session 0x55b097a15e00
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099068000
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:48.334380+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:49.334639+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 995183 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:50.334818+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026800 session 0x55b0987bf4a0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b096dca000 session 0x55b098e19a40
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:51.334963+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b096dca000
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:52.335144+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:53.335314+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:54.335535+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996695 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:55.335719+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:56.335904+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:57.336083+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:58.336250+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:02:59.336421+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996695 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:00.336613+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.253569603s of 15.263068199s, submitted: 2
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:01.336728+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b097aa5c00
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:02.336821+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:03.337018+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:04.337186+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998207 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:05.337351+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:06.337533+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:07.337640+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82583552 unmapped: 3276800 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:08.337783+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 3260416 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:09.337926+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 998207 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:10.338106+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:11.338299+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:12.338431+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:13.338565+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:14.338726+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997616 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:15.338818+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:16.338976+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.775625229s of 15.790586472s, submitted: 4
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:17.339143+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:18.339295+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:19.339495+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997484 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:20.339667+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:21.339842+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:22.340026+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:23.340214+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:24.340352+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997484 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:25.340526+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099027000 session 0x55b0988cda40
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b096d95c00 session 0x55b0987be5a0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:26.340666+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 3252224 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:27.340847+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 3235840 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:28.340941+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 3235840 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:29.341131+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 3235840 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997484 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:30.341297+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 3235840 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:31.341387+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 3235840 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:32.341532+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 3235840 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:33.341677+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 3235840 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:34.341847+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 3235840 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997484 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:35.342411+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3227648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:36.342826+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3227648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026800
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.357236862s of 20.361238480s, submitted: 1
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:37.342960+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3227648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:38.343129+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3227648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:39.344580+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3227648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997616 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026400
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:40.344741+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3227648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:41.344932+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3227648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:42.345097+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3227648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026000
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:43.345256+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3227648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:44.345581+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3227648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000640 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:45.345705+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3227648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:46.345846+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 3227648 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:47.346463+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 3211264 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:48.346633+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 3211264 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:49.346829+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 3211264 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000049 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:50.347030+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 3211264 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:51.347200+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.517070770s of 14.537599564s, submitted: 4
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 3203072 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:52.347372+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 3203072 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:53.347530+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 3203072 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:54.347679+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 3203072 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999917 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:55.347821+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 3203072 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:56.347958+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3194880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:57.348159+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3194880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:58.348344+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026400 session 0x55b09905d2c0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b097aa5c00 session 0x55b096bfb0e0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3194880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b096dca000 session 0x55b098e285a0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099027400 session 0x55b098e29860
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:03:59.348524+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3194880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999917 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:00.348676+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3194880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:01.348825+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3194880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:02.348987+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3194880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:03.349118+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3194880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:04.349259+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3194880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 999917 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:05.349464+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3194880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:06.349627+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 3194880 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:07.350909+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82681856 unmapped: 3178496 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:08.351096+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82681856 unmapped: 3178496 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:09.351314+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b096d95c00
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.958301544s of 17.961801529s, submitted: 1
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b097aa5c00
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82698240 unmapped: 3162112 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000181 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:10.351459+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82698240 unmapped: 3162112 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:11.351593+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82698240 unmapped: 3162112 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:12.351738+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82698240 unmapped: 3162112 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:13.351869+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82698240 unmapped: 3162112 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:14.352009+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82698240 unmapped: 3162112 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000181 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:15.352137+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026400
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 3153920 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread fragmentation_score=0.000030 took=0.000038s
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:16.352270+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 3153920 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:17.352432+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 3153920 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:18.353030+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 3153920 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:19.353357+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 3145728 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002614 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:20.353760+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 3145728 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:21.354045+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.060987473s of 12.085634232s, submitted: 5
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 3145728 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:22.354176+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 3145728 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:23.354311+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 3145728 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:24.354546+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 3145728 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 9078 writes, 35K keys, 9078 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 9078 writes, 2064 syncs, 4.40 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 776 writes, 1221 keys, 776 commit groups, 1.0 writes per commit group, ingest: 0.40 MB, 0.00 MB/s
                                           Interval WAL: 776 writes, 366 syncs, 2.12 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac69b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac69b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac69b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55b094ac7350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002023 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:25.354703+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 3145728 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:26.354860+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 3137536 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:27.354993+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3121152 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:28.355139+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3121152 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:29.355382+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3121152 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:30.355519+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3121152 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:31.355690+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3121152 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:32.355827+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3121152 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:33.356021+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3121152 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:34.356216+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3121152 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:35.356394+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3121152 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:36.356556+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3121152 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:37.356719+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 3121152 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:38.356920+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 3104768 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:39.357150+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 3104768 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:40.357312+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 3104768 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:41.357991+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 3104768 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:42.358141+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82755584 unmapped: 3104768 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:43.358282+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 3096576 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:44.358429+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 3096576 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:45.358902+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 3096576 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:46.359011+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82763776 unmapped: 3096576 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:47.359145+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:48.359270+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:49.359385+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:50.359557+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:51.359755+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:52.359879+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:53.360063+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:54.360182+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:55.360399+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:56.360545+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:57.360760+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:58.360993+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:04:59.361313+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:00.361557+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:01.361946+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:02.362265+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:03.362457+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:04.362581+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:05.362755+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:06.362883+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82780160 unmapped: 3080192 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:07.363010+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82796544 unmapped: 3063808 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:08.363123+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82796544 unmapped: 3063808 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:09.363279+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82796544 unmapped: 3063808 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:10.363434+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82796544 unmapped: 3063808 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:11.363546+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 3055616 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:12.363729+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 3055616 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:13.363908+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82804736 unmapped: 3055616 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:14.364230+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:15.364418+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:16.364635+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:17.364824+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:18.365014+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:19.365245+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:20.365544+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:21.365726+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:22.365959+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:23.366138+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:24.366434+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:25.366634+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:26.366848+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82812928 unmapped: 3047424 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:27.367055+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 3031040 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:28.367307+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 3031040 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:29.367677+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82829312 unmapped: 3031040 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:30.367929+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:31.368148+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:32.368464+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-mon[79167]: from='client.27179 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:33.368644+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-mon[79167]: from='client.27994 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-mon[79167]: from='client.18468 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:34.368811+0000)
Oct 10 10:30:49 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3444631537' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-mon[79167]: from='client.27191 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-mon[79167]: from='client.28015 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3721446131' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:35.369030+0000)
Oct 10 10:30:49 compute-1 ceph-mon[79167]: from='client.18486 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1092835265' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/724274365' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-mon[79167]: from='client.27209 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-mon[79167]: from='client.28033 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2000313604' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:36.369221+0000)
Oct 10 10:30:49 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/3657722192' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3730549091' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:37.369486+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:38.369726+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:39.370006+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:40.370266+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:41.370517+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:42.370739+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:43.370939+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:44.371209+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:45.371491+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:46.371704+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82837504 unmapped: 3022848 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:47.371902+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3006464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:48.372141+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099027000 session 0x55b0988dd680
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b097aa5c00 session 0x55b097a69e00
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3006464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:49.372405+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3006464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:50.372605+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3006464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:51.372825+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3006464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:52.373028+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3006464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:53.373235+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3006464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:54.373505+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3006464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:55.373763+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001759 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3006464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:56.373997+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3006464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:57.374198+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3006464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:58.374458+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82853888 unmapped: 3006464 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:05:59.374761+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099069c00
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 97.974418640s of 97.984451294s, submitted: 3
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:00.374965+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1001891 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:01.375136+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:02.375466+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:03.375762+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:04.376081+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:05.376229+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099069800
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006427 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:06.376414+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:07.376607+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:08.376835+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:09.377080+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:10.377381+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005836 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:11.377625+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.054930687s of 12.073850632s, submitted: 5
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:12.377822+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:13.378002+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:14.378234+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:15.378475+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005113 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:16.378715+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026400 session 0x55b0994fde00
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b096d95c00 session 0x55b09900ab40
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:17.378927+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:18.379577+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:19.380130+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:20.380558+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005113 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:21.380730+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82862080 unmapped: 2998272 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:22.381177+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 2990080 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:23.381488+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 2990080 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:24.381651+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 2990080 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:25.381872+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005113 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 2990080 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:26.382048+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 2990080 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:27.382238+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b096d95c00
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.182666779s of 16.189365387s, submitted: 2
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 2990080 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:28.382526+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82870272 unmapped: 2990080 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:29.382825+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:30.383045+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82878464 unmapped: 2981888 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005245 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:31.383594+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82878464 unmapped: 2981888 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:32.383870+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82878464 unmapped: 2981888 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099069800 session 0x55b098fe4000
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099069c00 session 0x55b098f794a0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:33.384286+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82878464 unmapped: 2981888 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b097aa5c00
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:34.384670+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 82878464 unmapped: 2981888 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:35.384802+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83189760 unmapped: 2670592 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005245 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:36.385008+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:37.385234+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:38.385420+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:39.385622+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.918289185s of 12.100981712s, submitted: 367
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:40.385817+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004063 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:41.386008+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:42.386182+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:43.386430+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026400
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:44.386616+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:45.386823+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004063 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:46.387018+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:47.387179+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026000 session 0x55b098fe1680
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026800 session 0x55b09905d0e0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:48.387423+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:49.387671+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:50.387895+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004063 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:51.388138+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:52.388403+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:53.388568+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:54.388810+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:55.389027+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004063 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:56.389246+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:57.389439+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:58.389628+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026400 session 0x55b09905c780
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026400
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.894775391s of 18.905117035s, submitted: 3
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:06:59.389834+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:00.390695+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1004063 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:01.390913+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:02.391153+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:03.391393+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:04.391588+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026000
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:05.391779+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005575 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:06.392090+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:07.392387+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:08.392685+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:09.392991+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026800
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.683311462s of 10.696245193s, submitted: 3
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:10.393283+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1005707 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:11.393584+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:12.393822+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:13.394092+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:14.394438+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:15.394745+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099069800
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008599 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:16.394963+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:17.395171+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:18.395304+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:19.395506+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:20.395697+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008599 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:21.395863+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.102847099s of 12.118186951s, submitted: 4
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:22.395999+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:23.396189+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:24.396383+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:25.396538+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:26.396679+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:27.396802+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:28.397007+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:29.397239+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:30.397418+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:31.397591+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:32.397750+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:33.397893+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:34.398087+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:35.398236+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:36.398395+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:37.398540+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:38.398699+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:39.398945+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:40.399119+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:41.399348+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:42.399483+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:43.399682+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:44.399912+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:45.400063+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:46.400289+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:47.400453+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:48.400641+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:49.400919+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:50.401153+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:51.401465+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:52.401717+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:53.401888+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:54.402065+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:55.402242+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:56.402407+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:57.402580+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:58.402725+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:07:59.402895+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:00.403071+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:01.403252+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:02.403397+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:03.403594+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:04.403833+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:05.404043+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:06.404201+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:07.404425+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:08.404665+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:09.404926+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:10.405128+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:11.405469+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:12.405639+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:13.405990+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:14.406183+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:15.406445+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:16.406637+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:17.406822+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:18.407053+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:19.407277+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:20.407440+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:21.407587+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:22.407762+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099069800 session 0x55b0986805a0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099026800 session 0x55b09840d2c0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:23.408010+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:24.408279+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:25.408531+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:26.408785+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:27.409002+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:28.409161+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:29.409379+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:30.409528+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007876 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:31.409714+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:32.409858+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:33.410038+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099069c00
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 71.958114624s of 71.965682983s, submitted: 2
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:34.410224+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:35.410412+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008008 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:36.410606+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:37.410827+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:38.411013+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:39.411269+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:40.411489+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008008 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:41.411701+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:42.411925+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:43.412129+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:44.412296+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:45.412472+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.132945061s of 12.140886307s, submitted: 2
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006826 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:46.412663+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:47.412824+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:48.413004+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:49.413231+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:50.413364+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b097aa5c00 session 0x55b098f8a3c0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b096d95c00 session 0x55b0988dd680
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006694 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:51.413483+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:52.413696+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:53.413946+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:54.414207+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:55.414420+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006694 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:56.414581+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:57.414724+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:58.414833+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:08:59.415038+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:00.415182+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006694 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:01.415359+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027400
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.335838318s of 16.343191147s, submitted: 2
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:02.415528+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:03.415729+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:04.415902+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:05.416097+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006826 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:06.416310+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:07.416528+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 2473984 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099069400
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:08.416711+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 2465792 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:09.416883+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 2514944 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:10.417046+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 2514944 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:11.417191+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008338 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 2514944 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:12.417439+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 2514944 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:13.417595+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 2514944 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:14.417745+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 2514944 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:15.417882+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 2514944 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:16.418029+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1008338 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 2514944 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:17.418303+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.721952438s of 15.729380608s, submitted: 2
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2498560 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:18.418575+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2498560 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:19.418767+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2498560 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:20.418904+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099069000
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2498560 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099069400 session 0x55b098fe0f00
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 ms_handle_reset con 0x55b099027400 session 0x55b098f9fc20
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _renew_subs
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:21.419061+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1011972 data_alloc: 218103808 data_used: 282624
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 144 heartbeat osd_stat(store_statfs(0x4fc5e8000/0x0/0x4ffc00000, data 0x167e70/0x224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 2498560 heap: 85860352 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 144 handle_osd_map epochs [144,145], i have 144, src has [1,145]
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:22.419261+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 18112512 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 145 ms_handle_reset con 0x55b099069000 session 0x55b0988aeb40
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:23.419403+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099069400
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _renew_subs
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 18096128 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 146 handle_osd_map epochs [146,147], i have 146, src has [1,147]
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:24.419576+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 86663168 unmapped: 15982592 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:25.419768+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 148 ms_handle_reset con 0x55b099069400 session 0x55b098f8be00
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:26.419977+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080007 data_alloc: 218103808 data_used: 290816
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:27.420144+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd7000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:28.420348+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:29.420537+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:30.420753+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:31.420916+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080007 data_alloc: 218103808 data_used: 290816
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b096d95c00
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.096752167s of 14.256991386s, submitted: 46
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd7000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:32.421101+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:33.421291+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:34.421469+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:35.421667+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:36.421834+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080643 data_alloc: 218103808 data_used: 290816
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:37.422051+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b097aa5c00
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:38.422188+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:39.422432+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:40.422636+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:41.422915+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080643 data_alloc: 218103808 data_used: 290816
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:42.423074+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:43.423389+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.083035469s of 12.092510223s, submitted: 2
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:44.423586+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:45.423809+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 17031168 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:46.424004+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080052 data_alloc: 218103808 data_used: 290816
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:47.424183+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:48.424389+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:49.424644+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:50.424853+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:51.425019+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079920 data_alloc: 218103808 data_used: 290816
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:52.425197+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:53.425429+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:54.425618+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:55.425813+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:56.425980+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079920 data_alloc: 218103808 data_used: 290816
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:57.426150+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:58.426296+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:09:59.426503+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:00.426661+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:01.426871+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1079920 data_alloc: 218103808 data_used: 290816
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:02.427127+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026800
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 148 ms_handle_reset con 0x55b099026800 session 0x55b09900bc20
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099069800
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 148 ms_handle_reset con 0x55b099069800 session 0x55b0990083c0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:03.427277+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026800
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 148 ms_handle_reset con 0x55b099026800 session 0x55b098e26000
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 148 ms_handle_reset con 0x55b099026000 session 0x55b099433680
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 148 ms_handle_reset con 0x55b099026400 session 0x55b09722fa40
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 17014784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:04.427444+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027400
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 148 ms_handle_reset con 0x55b099027400 session 0x55b098856b40
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 93822976 unmapped: 8822784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:05.427570+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 148 heartbeat osd_stat(store_statfs(0x4fbdd8000/0x0/0x4ffc00000, data 0x972285/0xa34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099069000
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 148 ms_handle_reset con 0x55b099069000 session 0x55b096c4a3c0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026000
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _renew_subs
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.802989960s of 21.816146851s, submitted: 2
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 93822976 unmapped: 8822784 heap: 102645760 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:06.427759+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1104862 data_alloc: 218103808 data_used: 7106560
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _renew_subs
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 150 ms_handle_reset con 0x55b099026000 session 0x55b098fe0960
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 94945280 unmapped: 11378688 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026400
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 150 ms_handle_reset con 0x55b099026400 session 0x55b0982841e0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:07.427932+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026800
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 150 ms_handle_reset con 0x55b099026800 session 0x55b098e19860
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027400
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 150 ms_handle_reset con 0x55b099027400 session 0x55b0988cd860
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099069000
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 150 ms_handle_reset con 0x55b099069000 session 0x55b098f8b2c0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fbdd3000/0x0/0x4ffc00000, data 0x974379/0xa38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 95068160 unmapped: 11255808 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:08.428130+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 150 heartbeat osd_stat(store_statfs(0x4fb81c000/0x0/0x4ffc00000, data 0xf294c4/0xfee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 150 handle_osd_map epochs [150,151], i have 150, src has [1,151]
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 10387456 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:09.428429+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 10387456 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:10.428598+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 10387456 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:11.428803+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1155058 data_alloc: 218103808 data_used: 7106560
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 10387456 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:12.428986+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb81a000/0x0/0x4ffc00000, data 0xf2b496/0xff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 10387456 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:13.429165+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 10387456 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:14.429439+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026000
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026400
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026400 session 0x55b096e1c960
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 10387456 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb81a000/0x0/0x4ffc00000, data 0xf2b496/0xff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:15.429566+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026800
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027400
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 95936512 unmapped: 10387456 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:16.429699+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1156283 data_alloc: 218103808 data_used: 7114752
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 96575488 unmapped: 9748480 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:17.429849+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 97320960 unmapped: 9003008 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:18.429989+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 97320960 unmapped: 9003008 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:19.430135+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 97320960 unmapped: 9003008 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:20.430244+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb81b000/0x0/0x4ffc00000, data 0xf2b496/0xff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 97320960 unmapped: 9003008 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:21.430418+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188963 data_alloc: 218103808 data_used: 7876608
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 97320960 unmapped: 9003008 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:22.430606+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 97320960 unmapped: 9003008 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:23.430776+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.784969330s of 17.929061890s, submitted: 52
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 97320960 unmapped: 9003008 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:24.430903+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 97320960 unmapped: 9003008 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:25.431042+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb81b000/0x0/0x4ffc00000, data 0xf2b496/0xff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 97320960 unmapped: 9003008 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:26.431180+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb81b000/0x0/0x4ffc00000, data 0xf2b496/0xff1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x33ef9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1188372 data_alloc: 218103808 data_used: 7876608
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 97320960 unmapped: 9003008 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:27.431356+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 4276224 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:28.431526+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102105088 unmapped: 4218880 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:29.431736+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x1176496/0x123c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102121472 unmapped: 4202496 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:30.431886+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102121472 unmapped: 4202496 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:31.432074+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x1176496/0x123c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217534 data_alloc: 218103808 data_used: 8945664
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:32.432221+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:33.432401+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x1176496/0x123c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:34.432613+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:35.432772+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:36.432958+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217534 data_alloc: 218103808 data_used: 8945664
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:37.433176+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:38.433424+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x1176496/0x123c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:39.433632+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:40.433805+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:41.434246+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217686 data_alloc: 218103808 data_used: 8949760
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:42.434499+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:43.434686+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x1176496/0x123c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:44.434902+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:45.435088+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:46.435398+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102154240 unmapped: 4169728 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217838 data_alloc: 218103808 data_used: 8953856
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x1176496/0x123c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x1176496/0x123c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:47.435586+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102187008 unmapped: 4136960 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:48.435822+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102187008 unmapped: 4136960 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x1176496/0x123c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:49.436047+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 102187008 unmapped: 4136960 heap: 106323968 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099069400
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099069400 session 0x55b0991de960
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:50.436213+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104497152 unmapped: 2875392 heap: 107372544 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099068c00
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.675640106s of 26.806079865s, submitted: 44
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x1176496/0x123c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099068c00 session 0x55b098fe1a40
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099068800
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099068800 session 0x55b0988ae5a0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099068400
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099068400 session 0x55b09722e960
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026400
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026400 session 0x55b096bfb860
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099068800
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099068800 session 0x55b099432960
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa42f000/0x0/0x4ffc00000, data 0x1176496/0x123c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:51.436437+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105299968 unmapped: 17948672 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293835 data_alloc: 218103808 data_used: 8970240
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:52.436588+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105299968 unmapped: 17948672 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b098fe0b40
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099069c00 session 0x55b098e28960
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:53.436835+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105299968 unmapped: 17948672 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f97fa000/0x0/0x4ffc00000, data 0x1dac496/0x1e72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:54.437040+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105299968 unmapped: 17948672 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:55.437219+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105299968 unmapped: 17948672 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:56.437401+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105299968 unmapped: 17948672 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293835 data_alloc: 218103808 data_used: 8970240
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099068c00
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099068c00 session 0x55b0988af680
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:57.437613+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105299968 unmapped: 17948672 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:58.437806+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105299968 unmapped: 17948672 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f97fa000/0x0/0x4ffc00000, data 0x1dac496/0x1e72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026400
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026400 session 0x55b099433860
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:10:59.438080+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105299968 unmapped: 17948672 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b0988cc000
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099068800
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099068800 session 0x55b0988dcb40
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:00.438281+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105177088 unmapped: 18071552 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f97fa000/0x0/0x4ffc00000, data 0x1dac496/0x1e72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099069c00
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.327645302s of 10.456887245s, submitted: 32
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099069400
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:01.438473+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105177088 unmapped: 18071552 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1295076 data_alloc: 218103808 data_used: 8974336
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:02.438652+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105177088 unmapped: 18071552 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:03.438834+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105177088 unmapped: 18071552 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099082400
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f97fa000/0x0/0x4ffc00000, data 0x1dac496/0x1e72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:04.439000+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108740608 unmapped: 14508032 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:05.439903+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 109838336 unmapped: 13410304 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f97fa000/0x0/0x4ffc00000, data 0x1dac496/0x1e72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:06.440641+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 13393920 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361328 data_alloc: 234881024 data_used: 16445440
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:07.441090+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 13393920 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:08.441610+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 13393920 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:09.442185+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 13393920 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099107400
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:10.442643+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 13393920 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f97fa000/0x0/0x4ffc00000, data 0x1dac496/0x1e72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:11.443439+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 13393920 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1364016 data_alloc: 234881024 data_used: 16445440
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:12.444054+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 109854720 unmapped: 13393920 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:13.444799+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 109838336 unmapped: 13410304 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.515455246s of 12.531072617s, submitted: 5
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:14.444984+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 8232960 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f8bfa000/0x0/0x4ffc00000, data 0x29a6496/0x2a6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:15.445259+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115982336 unmapped: 7266304 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:16.445700+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115982336 unmapped: 7266304 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1475602 data_alloc: 234881024 data_used: 17408000
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:17.445921+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115990528 unmapped: 7258112 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f8bb4000/0x0/0x4ffc00000, data 0x29e3496/0x2aa9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:18.446112+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 116023296 unmapped: 7225344 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:19.446367+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 116023296 unmapped: 7225344 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:20.446547+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 116023296 unmapped: 7225344 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:21.446718+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 8200192 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1466414 data_alloc: 234881024 data_used: 17408000
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f8bc0000/0x0/0x4ffc00000, data 0x29e6496/0x2aac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:22.446886+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 8200192 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:23.447091+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115056640 unmapped: 8192000 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.086705208s of 10.319118500s, submitted: 125
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099069c00 session 0x55b0988afa40
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099069400 session 0x55b09638f0e0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026400
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:24.447249+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f8bc0000/0x0/0x4ffc00000, data 0x29e6496/0x2aac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026400 session 0x55b098f9f680
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 107134976 unmapped: 16113664 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:25.447421+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 107134976 unmapped: 16113664 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:26.447587+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 107134976 unmapped: 16113664 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207454 data_alloc: 218103808 data_used: 5505024
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:27.447817+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 107134976 unmapped: 16113664 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:28.448016+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa430000/0x0/0x4ffc00000, data 0x1176496/0x123c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 107134976 unmapped: 16113664 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:29.448192+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 107134976 unmapped: 16113664 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026800 session 0x55b0990081e0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027400 session 0x55b09905c5a0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa430000/0x0/0x4ffc00000, data 0x1176496/0x123c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:30.448312+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105947136 unmapped: 17301504 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b09840d4a0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:31.448511+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130516 data_alloc: 218103808 data_used: 3641344
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:32.448636+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:33.448853+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:34.449080+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:35.449268+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:36.449482+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130516 data_alloc: 218103808 data_used: 3641344
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:37.449692+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:38.449877+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:39.450104+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:40.450248+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:41.450407+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130516 data_alloc: 218103808 data_used: 3641344
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:42.450577+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:43.450751+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:44.450943+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:45.451173+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:46.451397+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130516 data_alloc: 218103808 data_used: 3641344
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:47.451609+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:48.451768+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:49.451984+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:50.452150+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:51.452350+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1130516 data_alloc: 218103808 data_used: 3641344
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:52.452519+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:53.452694+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:54.452854+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:55.452996+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fac2e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105963520 unmapped: 17285120 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099068800
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099068800 session 0x55b097a68d20
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026400
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026400 session 0x55b0987bfa40
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026800
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026800 session 0x55b0993723c0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b09936c960
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027400
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 31.987216949s of 32.222537994s, submitted: 83
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:56.453167+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027400 session 0x55b096d5ed20
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099069c00
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099069c00 session 0x55b098681860
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026400
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026400 session 0x55b098e292c0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026800
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026800 session 0x55b0970df2c0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b0970ded20
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 19275776 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144104 data_alloc: 218103808 data_used: 3641344
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:57.453386+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 19275776 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:58.453533+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 19275776 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:11:59.453769+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fab87000/0x0/0x4ffc00000, data 0xa1e4a6/0xae5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 19275776 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fab87000/0x0/0x4ffc00000, data 0xa1e4a6/0xae5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026000 session 0x55b0987bd2c0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:00.453959+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 19275776 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:01.454179+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 19275776 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1144104 data_alloc: 218103808 data_used: 3641344
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:02.454404+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 19275776 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:03.454560+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 19275776 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027400
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027400 session 0x55b09905d860
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:04.454701+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026000
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026000 session 0x55b09723cb40
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 103972864 unmapped: 19275776 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fab87000/0x0/0x4ffc00000, data 0xa1e4a6/0xae5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026400
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026400 session 0x55b0987be5a0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026800
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026800 session 0x55b098f9e1e0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fab87000/0x0/0x4ffc00000, data 0xa1e4a6/0xae5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:05.454840+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104292352 unmapped: 18956288 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6000
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:06.454960+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104300544 unmapped: 18948096 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153680 data_alloc: 218103808 data_used: 4112384
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:07.455082+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fab62000/0x0/0x4ffc00000, data 0xa424c9/0xb0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104308736 unmapped: 18939904 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:08.455251+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fab62000/0x0/0x4ffc00000, data 0xa424c9/0xb0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104308736 unmapped: 18939904 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:09.456245+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104308736 unmapped: 18939904 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:10.456737+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104308736 unmapped: 18939904 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6400
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.687290192s of 14.744665146s, submitted: 17
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:11.456927+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fab62000/0x0/0x4ffc00000, data 0xa424c9/0xb0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104316928 unmapped: 18931712 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154724 data_alloc: 218103808 data_used: 4239360
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:12.457651+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104316928 unmapped: 18931712 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:13.458113+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104316928 unmapped: 18931712 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fab62000/0x0/0x4ffc00000, data 0xa424c9/0xb0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:14.458506+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104316928 unmapped: 18931712 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:15.458710+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104316928 unmapped: 18931712 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:16.459069+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fab62000/0x0/0x4ffc00000, data 0xa424c9/0xb0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104316928 unmapped: 18931712 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6800
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1159212 data_alloc: 218103808 data_used: 4243456
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:17.459420+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 107143168 unmapped: 16105472 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:18.459603+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 107167744 unmapped: 16080896 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:19.459824+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108568576 unmapped: 14680064 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa4ce000/0x0/0x4ffc00000, data 0x10c04c9/0x1188000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:20.460075+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106143744 unmapped: 17104896 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:21.460371+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106143744 unmapped: 17104896 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213196 data_alloc: 218103808 data_used: 4591616
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:22.460566+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106143744 unmapped: 17104896 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:23.460720+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106143744 unmapped: 17104896 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:24.460875+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.378688812s of 13.587653160s, submitted: 76
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106143744 unmapped: 17104896 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:25.461046+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa4e4000/0x0/0x4ffc00000, data 0x10c04c9/0x1188000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106143744 unmapped: 17104896 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:26.461225+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106151936 unmapped: 17096704 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213232 data_alloc: 218103808 data_used: 4591616
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa4e4000/0x0/0x4ffc00000, data 0x10c04c9/0x1188000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:27.461443+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106151936 unmapped: 17096704 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:28.461668+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106151936 unmapped: 17096704 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:29.461880+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:30.462062+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:31.462226+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa4e4000/0x0/0x4ffc00000, data 0x10c04c9/0x1188000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213232 data_alloc: 218103808 data_used: 4591616
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:32.462416+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:33.462597+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:34.462768+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:35.462973+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa4e4000/0x0/0x4ffc00000, data 0x10c04c9/0x1188000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b098680960
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6400 session 0x55b0994323c0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:36.463236+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213232 data_alloc: 218103808 data_used: 4591616
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:37.463450+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:38.463676+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:39.463912+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:40.464044+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:41.464243+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa4e4000/0x0/0x4ffc00000, data 0x10c04c9/0x1188000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106160128 unmapped: 17088512 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213232 data_alloc: 218103808 data_used: 4591616
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:42.464434+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa4e4000/0x0/0x4ffc00000, data 0x10c04c9/0x1188000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106168320 unmapped: 17080320 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:43.464637+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106168320 unmapped: 17080320 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:44.464830+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 106168320 unmapped: 17080320 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:45.465035+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b096d7fe00
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.899662018s of 20.906446457s, submitted: 2
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6000 session 0x55b09723cd20
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa4e4000/0x0/0x4ffc00000, data 0x10c04c9/0x1188000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6800
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104374272 unmapped: 18874368 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b098857a40
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:46.465256+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026000
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1143667 data_alloc: 218103808 data_used: 3641344
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:47.465501+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81d000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:48.465722+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:49.465964+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:50.466111+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:51.466434+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1143667 data_alloc: 218103808 data_used: 3641344
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:52.466587+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026400
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:53.466809+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81d000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:54.466964+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:55.467275+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.475157738s of 10.608925819s, submitted: 41
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:56.467574+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1142193 data_alloc: 218103808 data_used: 3641344
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:57.467810+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:58.467974+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:12:59.468204+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:00.468424+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:01.468588+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:02.468823+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141470 data_alloc: 218103808 data_used: 3641344
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:03.468975+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:04.469174+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:05.469402+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:06.469618+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:07.469790+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141470 data_alloc: 218103808 data_used: 3641344
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:08.470017+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099107400 session 0x55b096c4a5a0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099082400 session 0x55b0988dcf00
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:09.470243+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:10.470395+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:11.470531+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104390656 unmapped: 18857984 heap: 123248640 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:12.470676+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141470 data_alloc: 218103808 data_used: 3641344
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099082400
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.499574661s of 16.514310837s, submitted: 4
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099082400 session 0x55b096c4af00
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6000
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6000 session 0x55b096c4a3c0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6800
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b09936da40
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b09936cf00
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099107400
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099107400 session 0x55b0972781e0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104521728 unmapped: 19783680 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:13.470936+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104521728 unmapped: 19783680 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:14.475921+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104521728 unmapped: 19783680 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:15.476259+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa47d000/0x0/0x4ffc00000, data 0xd19496/0xddf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104521728 unmapped: 19783680 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:16.477222+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104521728 unmapped: 19783680 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:17.477497+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1173157 data_alloc: 218103808 data_used: 3641344
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6000
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104538112 unmapped: 19767296 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6000 session 0x55b09874a780
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:18.478051+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104538112 unmapped: 19767296 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:19.478291+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6800
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa47c000/0x0/0x4ffc00000, data 0xd194b9/0xde0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104554496 unmapped: 19750912 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:20.478438+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 104636416 unmapped: 19668992 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa47c000/0x0/0x4ffc00000, data 0xd194b9/0xde0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:21.478616+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105037824 unmapped: 19267584 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:22.479802+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201011 data_alloc: 218103808 data_used: 7344128
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105037824 unmapped: 19267584 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:23.480308+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105037824 unmapped: 19267584 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:24.480469+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa47c000/0x0/0x4ffc00000, data 0xd194b9/0xde0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105037824 unmapped: 19267584 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:25.480597+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa47c000/0x0/0x4ffc00000, data 0xd194b9/0xde0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105046016 unmapped: 19259392 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:26.480729+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105046016 unmapped: 19259392 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:27.480899+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201011 data_alloc: 218103808 data_used: 7344128
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105046016 unmapped: 19259392 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:28.481446+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105046016 unmapped: 19259392 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:29.482175+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa47c000/0x0/0x4ffc00000, data 0xd194b9/0xde0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105046016 unmapped: 19259392 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:30.482570+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa47c000/0x0/0x4ffc00000, data 0xd194b9/0xde0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.522539139s of 18.680767059s, submitted: 28
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 105152512 unmapped: 19152896 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:31.482691+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 109240320 unmapped: 15065088 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:32.482826+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280927 data_alloc: 218103808 data_used: 7426048
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 16277504 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:33.483269+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99c6000/0x0/0x4ffc00000, data 0x17cf4b9/0x1896000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99c6000/0x0/0x4ffc00000, data 0x17cf4b9/0x1896000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 16277504 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:34.483550+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 16277504 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:35.483912+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 16277504 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:36.484206+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 16277504 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:37.484563+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99c6000/0x0/0x4ffc00000, data 0x17cf4b9/0x1896000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280795 data_alloc: 218103808 data_used: 7426048
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108027904 unmapped: 16277504 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:38.484730+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b097aa5c00 session 0x55b098fe4000
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b096d95c00 session 0x55b098f78f00
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 16269312 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:39.485000+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 16269312 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:40.485204+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 16269312 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:41.485426+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99c6000/0x0/0x4ffc00000, data 0x17cf4b9/0x1896000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108036096 unmapped: 16269312 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:42.485704+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280811 data_alloc: 218103808 data_used: 7426048
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 16261120 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:43.485913+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 16261120 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:44.486120+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 16261120 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:45.486378+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99c6000/0x0/0x4ffc00000, data 0x17cf4b9/0x1896000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99c6000/0x0/0x4ffc00000, data 0x17cf4b9/0x1896000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 16261120 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:46.486627+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108044288 unmapped: 16261120 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:47.486769+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280811 data_alloc: 218103808 data_used: 7426048
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99c6000/0x0/0x4ffc00000, data 0x17cf4b9/0x1896000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c7000
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b098e28000
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c7400
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7400 session 0x55b09936de00
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b096d95c00
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b096d95c00 session 0x55b097278d20
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 108060672 unmapped: 16244736 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:48.487052+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b097aa5c00
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b097aa5c00 session 0x55b09936d4a0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 112525312 unmapped: 11780096 heap: 124305408 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:49.487311+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6000
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6000 session 0x55b099432d20
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c7000
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.515605927s of 18.667829514s, submitted: 60
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b0991dfc20
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c7800
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c7c00
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7800 session 0x55b098e281e0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99c6000/0x0/0x4ffc00000, data 0x17cf4b9/0x1896000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b096d95c00
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b096d95c00 session 0x55b097a68d20
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b097aa5c00
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b097aa5c00 session 0x55b0994325a0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6000
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6000 session 0x55b096ddd4a0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 19636224 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:50.487508+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 19636224 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:51.487673+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f90d1000/0x0/0x4ffc00000, data 0x20c252b/0x218b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:52.487840+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 19636224 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361043 data_alloc: 234881024 data_used: 10899456
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c7000
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b0982843c0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:53.488315+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 19636224 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:54.488674+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 19636224 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a094000
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094000 session 0x55b096d112c0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b096d95c00
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b096d95c00 session 0x55b098284960
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b097aa5c00
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b097aa5c00 session 0x55b096d7f2c0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:55.488869+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113057792 unmapped: 19644416 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6000
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c7000
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:56.489024+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113057792 unmapped: 19644416 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:57.489151+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 116203520 unmapped: 16498688 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420911 data_alloc: 234881024 data_used: 19783680
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f90d1000/0x0/0x4ffc00000, data 0x20c252b/0x218b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f90d1000/0x0/0x4ffc00000, data 0x20c252b/0x218b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:58.489557+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:13:59.489800+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f90d1000/0x0/0x4ffc00000, data 0x20c252b/0x218b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:00.489987+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:01.490227+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f90d1000/0x0/0x4ffc00000, data 0x20c252b/0x218b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:02.490429+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1423343 data_alloc: 234881024 data_used: 20115456
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:03.490653+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:04.490870+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026400 session 0x55b098fe4b40
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026000 session 0x55b099433c20
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:05.491056+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120291328 unmapped: 12410880 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f90d1000/0x0/0x4ffc00000, data 0x20c252b/0x218b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:06.491244+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120291328 unmapped: 12410880 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.933946609s of 17.117507935s, submitted: 45
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:07.491391+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 9232384 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1454755 data_alloc: 234881024 data_used: 20537344
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:08.491538+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123224064 unmapped: 9478144 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:09.491745+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123224064 unmapped: 9478144 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:10.491891+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f8d37000/0x0/0x4ffc00000, data 0x244652b/0x250f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123224064 unmapped: 9478144 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:11.492043+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123232256 unmapped: 9469952 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f8d37000/0x0/0x4ffc00000, data 0x244652b/0x250f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:12.492206+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123232256 unmapped: 9469952 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1464521 data_alloc: 234881024 data_used: 20365312
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:13.492432+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123265024 unmapped: 9437184 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:14.492615+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f8d37000/0x0/0x4ffc00000, data 0x244652b/0x250f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123265024 unmapped: 9437184 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6000 session 0x55b099432d20
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b097a154a0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b096d95c00
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:15.492745+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b096d95c00 session 0x55b0988cd0e0
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b097aa5c00
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 17924096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:16.492898+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 17924096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:17.493084+0000)
Oct 10 10:30:49 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 17924096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:49 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:49 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296267 data_alloc: 234881024 data_used: 10899456
Oct 10 10:30:49 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f957d000/0x0/0x4ffc00000, data 0x17cf4b9/0x1896000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:49 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:18.493301+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 17924096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:19.496084+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 17924096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:20.496502+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 17924096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f957d000/0x0/0x4ffc00000, data 0x17cf4b9/0x1896000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:21.498072+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 17924096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:22.498315+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 17924096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296267 data_alloc: 234881024 data_used: 10899456
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.756012917s of 16.126758575s, submitted: 124
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b0987be1e0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b09586ef00
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:23.498473+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114778112 unmapped: 17924096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026000 session 0x55b09638e3c0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81d000/0x0/0x4ffc00000, data 0x9784b9/0xa3f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:24.498768+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 21069824 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81d000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 11K writes, 41K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 11K writes, 3034 syncs, 3.72 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2207 writes, 6322 keys, 2207 commit groups, 1.0 writes per commit group, ingest: 6.08 MB, 0.01 MB/s
                                           Interval WAL: 2207 writes, 970 syncs, 2.28 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:25.498911+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 21061632 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:26.499090+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81d000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 21061632 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:27.499225+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 21061632 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166769 data_alloc: 218103808 data_used: 7114752
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:28.499394+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 21061632 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:29.499658+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 21061632 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:30.499863+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 21061632 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:31.500034+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111640576 unmapped: 21061632 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81d000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:32.500207+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 21053440 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166637 data_alloc: 218103808 data_used: 7114752
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81d000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:33.500386+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 21053440 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:34.500623+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111648768 unmapped: 21053440 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:35.500837+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 21045248 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:36.501102+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111656960 unmapped: 21045248 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:37.501291+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81d000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111665152 unmapped: 21037056 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166637 data_alloc: 218103808 data_used: 7114752
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:38.501534+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111665152 unmapped: 21037056 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:39.502124+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111665152 unmapped: 21037056 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:40.502515+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111665152 unmapped: 21037056 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81d000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:41.502720+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111665152 unmapped: 21037056 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:42.503131+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111665152 unmapped: 21037056 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166637 data_alloc: 218103808 data_used: 7114752
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:43.503452+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111665152 unmapped: 21037056 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:44.503697+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81d000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111665152 unmapped: 21037056 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:45.503871+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 21028864 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:46.504052+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 21028864 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:47.504271+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 21028864 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1166637 data_alloc: 218103808 data_used: 7114752
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81d000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:48.504478+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 21028864 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:49.504749+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 21028864 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b096d95c00
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b096d95c00 session 0x55b096dddc20
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6800
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b097a15860
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c7000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b099008f00
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b099009a40
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099026400
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.991001129s of 27.170951843s, submitted: 56
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:50.504935+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099026400 session 0x55b097a15e00
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b096d95c00
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b096d95c00 session 0x55b096c4b860
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6800
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b096c4a5a0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c7000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b096c4ba40
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b096d7f2c0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 21020672 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:51.505437+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 21020672 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:52.505683+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 21020672 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205559 data_alloc: 218103808 data_used: 7114752
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:53.505914+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa313000/0x0/0x4ffc00000, data 0xe824a6/0xf49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 21020672 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:54.506224+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111681536 unmapped: 21020672 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa313000/0x0/0x4ffc00000, data 0xe824a6/0xf49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:55.506455+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111689728 unmapped: 21012480 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:56.506665+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111689728 unmapped: 21012480 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a094400
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094400 session 0x55b0994fde00
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:57.507488+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b096d95c00
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111689728 unmapped: 21012480 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6800
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1205559 data_alloc: 218103808 data_used: 7114752
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:58.507646+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 20996096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:14:59.507801+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 111706112 unmapped: 20996096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa313000/0x0/0x4ffc00000, data 0xe824a6/0xf49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:00.507973+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 112394240 unmapped: 20307968 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa313000/0x0/0x4ffc00000, data 0xe824a6/0xf49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:01.508299+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 112394240 unmapped: 20307968 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:02.508570+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa313000/0x0/0x4ffc00000, data 0xe824a6/0xf49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 112402432 unmapped: 20299776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1241127 data_alloc: 234881024 data_used: 12398592
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:03.508785+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 112402432 unmapped: 20299776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:04.508995+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 112402432 unmapped: 20299776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:05.509210+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 112402432 unmapped: 20299776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa313000/0x0/0x4ffc00000, data 0xe824a6/0xf49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:06.509473+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa313000/0x0/0x4ffc00000, data 0xe824a6/0xf49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 112402432 unmapped: 20299776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:07.509643+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 112402432 unmapped: 20299776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1241127 data_alloc: 234881024 data_used: 12398592
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:08.509780+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 112402432 unmapped: 20299776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.066671371s of 19.101375580s, submitted: 6
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:09.509962+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117456896 unmapped: 15245312 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:10.510123+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117751808 unmapped: 14950400 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:11.510287+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117932032 unmapped: 14770176 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9b4a000/0x0/0x4ffc00000, data 0x164b4a6/0x1712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:12.510448+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117940224 unmapped: 14761984 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310445 data_alloc: 234881024 data_used: 13742080
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:13.510638+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117940224 unmapped: 14761984 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:14.510849+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 14753792 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:15.511076+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9b4a000/0x0/0x4ffc00000, data 0x164b4a6/0x1712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 14753792 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:16.511309+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 14753792 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:17.511498+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 14753792 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310445 data_alloc: 234881024 data_used: 13742080
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:18.511687+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 14753792 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9b4a000/0x0/0x4ffc00000, data 0x164b4a6/0x1712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:19.511892+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117956608 unmapped: 14745600 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:20.512079+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117956608 unmapped: 14745600 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9b4a000/0x0/0x4ffc00000, data 0x164b4a6/0x1712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:21.512450+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117956608 unmapped: 14745600 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:22.512614+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117956608 unmapped: 14745600 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310445 data_alloc: 234881024 data_used: 13742080
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:23.512840+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117956608 unmapped: 14745600 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:24.513040+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117956608 unmapped: 14745600 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:25.513266+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 14737408 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:26.513445+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 14737408 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9b4a000/0x0/0x4ffc00000, data 0x164b4a6/0x1712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:27.513597+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 14737408 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310445 data_alloc: 234881024 data_used: 13742080
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:28.513736+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 14737408 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:29.513949+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 14737408 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:30.514130+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 14737408 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9b4a000/0x0/0x4ffc00000, data 0x164b4a6/0x1712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:31.514279+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 14737408 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:32.514434+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 14737408 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1310445 data_alloc: 234881024 data_used: 13742080
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:33.514661+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 14737408 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:34.514847+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9b4a000/0x0/0x4ffc00000, data 0x164b4a6/0x1712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 14737408 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:35.514981+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117964800 unmapped: 14737408 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c7000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.543247223s of 26.644886017s, submitted: 51
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b099009680
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b098f792c0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a094800
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094800 session 0x55b098e29680
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a094c00
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094c00 session 0x55b09874fe00
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:36.515128+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a095000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a095000 session 0x55b096ddd680
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117538816 unmapped: 15163392 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99d9000/0x0/0x4ffc00000, data 0x17bc4a6/0x1883000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:37.515382+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117538816 unmapped: 15163392 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332538 data_alloc: 234881024 data_used: 13742080
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:38.515578+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117538816 unmapped: 15163392 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:39.515830+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117538816 unmapped: 15163392 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:40.516039+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117538816 unmapped: 15163392 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c7000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b0994fc5a0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:41.516459+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a094800
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117538816 unmapped: 15163392 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:42.516751+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99d9000/0x0/0x4ffc00000, data 0x17bc4a6/0x1883000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117833728 unmapped: 14868480 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1339966 data_alloc: 234881024 data_used: 14827520
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:43.516927+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117833728 unmapped: 14868480 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:44.517111+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99d9000/0x0/0x4ffc00000, data 0x17bc4a6/0x1883000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117850112 unmapped: 14852096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:45.517417+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99d9000/0x0/0x4ffc00000, data 0x17bc4a6/0x1883000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117850112 unmapped: 14852096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:46.517587+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117850112 unmapped: 14852096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:47.517757+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117850112 unmapped: 14852096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1340574 data_alloc: 234881024 data_used: 14888960
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:48.517994+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99d9000/0x0/0x4ffc00000, data 0x17bc4a6/0x1883000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117850112 unmapped: 14852096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:49.518262+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117850112 unmapped: 14852096 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:50.518458+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117800960 unmapped: 14901248 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:51.518622+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 117800960 unmapped: 14901248 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f99d9000/0x0/0x4ffc00000, data 0x17bc4a6/0x1883000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:52.518807+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.697357178s of 16.766319275s, submitted: 23
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119865344 unmapped: 12836864 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1388676 data_alloc: 234881024 data_used: 15142912
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:53.518976+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 121815040 unmapped: 10887168 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:54.519209+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120586240 unmapped: 12115968 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:55.519413+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120586240 unmapped: 12115968 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9345000/0x0/0x4ffc00000, data 0x1e4a4a6/0x1f11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:56.519629+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120586240 unmapped: 12115968 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:57.519904+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120586240 unmapped: 12115968 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1393116 data_alloc: 234881024 data_used: 14974976
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:58.520116+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120586240 unmapped: 12115968 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:15:59.520379+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120586240 unmapped: 12115968 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:00.520546+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 12451840 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:01.520780+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9327000/0x0/0x4ffc00000, data 0x1e6e4a6/0x1f35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9327000/0x0/0x4ffc00000, data 0x1e6e4a6/0x1f35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 12451840 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:02.521007+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120250368 unmapped: 12451840 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1392508 data_alloc: 234881024 data_used: 14974976
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:03.521165+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120258560 unmapped: 12443648 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:04.521395+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120258560 unmapped: 12443648 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:05.521521+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 12435456 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:06.521787+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 12435456 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:07.521966+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9327000/0x0/0x4ffc00000, data 0x1e6e4a6/0x1f35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 12435456 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1392508 data_alloc: 234881024 data_used: 14974976
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:08.522165+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 12435456 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:09.522511+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9327000/0x0/0x4ffc00000, data 0x1e6e4a6/0x1f35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 12435456 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9327000/0x0/0x4ffc00000, data 0x1e6e4a6/0x1f35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:10.522694+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 12427264 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:11.522918+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9327000/0x0/0x4ffc00000, data 0x1e6e4a6/0x1f35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.087865829s of 19.344846725s, submitted: 102
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 12427264 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:12.523082+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 12427264 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1392228 data_alloc: 234881024 data_used: 14974976
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:13.523255+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9324000/0x0/0x4ffc00000, data 0x1e714a6/0x1f38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 12427264 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:14.523456+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 12427264 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:15.523642+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 12427264 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:16.523915+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9324000/0x0/0x4ffc00000, data 0x1e714a6/0x1f38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 12427264 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:17.524096+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 12427264 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1392228 data_alloc: 234881024 data_used: 14974976
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:18.524445+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 12427264 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:19.524720+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 12427264 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:20.524940+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:21.525128+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:22.525413+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9324000/0x0/0x4ffc00000, data 0x1e714a6/0x1f38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1392228 data_alloc: 234881024 data_used: 14974976
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:23.525615+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:24.525832+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.535310745s of 12.545021057s, submitted: 2
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:25.526011+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:26.526211+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 12419072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:27.526389+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9324000/0x0/0x4ffc00000, data 0x1e714a6/0x1f38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120291328 unmapped: 12410880 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1392396 data_alloc: 234881024 data_used: 14974976
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:28.526603+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120291328 unmapped: 12410880 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:29.526891+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9324000/0x0/0x4ffc00000, data 0x1e714a6/0x1f38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0963a4800 session 0x55b099009860
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a095800
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120291328 unmapped: 12410880 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:30.527084+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120291328 unmapped: 12410880 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:31.527230+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9324000/0x0/0x4ffc00000, data 0x1e714a6/0x1f38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120291328 unmapped: 12410880 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:32.527479+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120291328 unmapped: 12410880 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:33.527736+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1392396 data_alloc: 234881024 data_used: 14974976
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120356864 unmapped: 12345344 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:34.528006+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.840996742s of 10.006482124s, submitted: 55
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120422400 unmapped: 12279808 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:35.528137+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9324000/0x0/0x4ffc00000, data 0x1e714a6/0x1f38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [0,0,1])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120594432 unmapped: 12107776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:36.528472+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120594432 unmapped: 12107776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:37.528681+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120594432 unmapped: 12107776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:38.528870+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1391892 data_alloc: 234881024 data_used: 14974976
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120594432 unmapped: 12107776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:39.529212+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9324000/0x0/0x4ffc00000, data 0x1e714a6/0x1f38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120594432 unmapped: 12107776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:40.529572+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120594432 unmapped: 12107776 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b09905c3c0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094800 session 0x55b0988ddc20
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:41.529848+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a095c00
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a095c00 session 0x55b0970df4a0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 13443072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:42.530093+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 13443072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:43.530267+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315902 data_alloc: 234881024 data_used: 13803520
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9b4a000/0x0/0x4ffc00000, data 0x164b4a6/0x1712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119259136 unmapped: 13443072 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:44.530514+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9b4a000/0x0/0x4ffc00000, data 0x164b4a6/0x1712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 13434880 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:45.530691+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 13434880 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:46.530809+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119267328 unmapped: 13434880 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:47.531030+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119275520 unmapped: 13426688 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:48.531293+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315902 data_alloc: 234881024 data_used: 13803520
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b096d95c00 session 0x55b0994fda40
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b096d7e5a0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c7000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.445354462s of 14.472743034s, submitted: 373
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115220480 unmapped: 17481728 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:49.531534+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b096ddc3c0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115220480 unmapped: 17481728 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:50.531674+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115220480 unmapped: 17481728 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:51.531873+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115220480 unmapped: 17481728 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:52.532110+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115220480 unmapped: 17481728 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:53.532388+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181100 data_alloc: 218103808 data_used: 7114752
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115220480 unmapped: 17481728 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:54.532585+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115220480 unmapped: 17481728 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:55.532779+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115220480 unmapped: 17481728 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:56.533090+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115220480 unmapped: 17481728 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:57.533291+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115220480 unmapped: 17481728 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:58.533530+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181100 data_alloc: 218103808 data_used: 7114752
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115228672 unmapped: 17473536 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:16:59.533827+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115228672 unmapped: 17473536 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:00.534013+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115236864 unmapped: 17465344 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:01.534218+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115236864 unmapped: 17465344 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:02.534423+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115236864 unmapped: 17465344 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:03.534599+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181100 data_alloc: 218103808 data_used: 7114752
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115236864 unmapped: 17465344 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:04.534765+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115236864 unmapped: 17465344 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:05.534948+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115236864 unmapped: 17465344 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:06.535158+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115245056 unmapped: 17457152 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:07.535453+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115245056 unmapped: 17457152 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:08.535804+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181100 data_alloc: 218103808 data_used: 7114752
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115245056 unmapped: 17457152 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:09.535966+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115245056 unmapped: 17457152 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:10.536144+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115245056 unmapped: 17457152 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:11.536364+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115245056 unmapped: 17457152 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:12.536633+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115245056 unmapped: 17457152 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:13.536807+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1181100 data_alloc: 218103808 data_used: 7114752
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115245056 unmapped: 17457152 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:14.536993+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115253248 unmapped: 17448960 heap: 132702208 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:15.537239+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b0987bc000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a094800
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094800 session 0x55b098681860
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a095c00
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a095c00 session 0x55b098f794a0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6800
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b098f783c0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c7000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.050231934s of 27.103757858s, submitted: 15
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114810880 unmapped: 24707072 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b09638fe00
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:16.537383+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b0994332c0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a094800
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094800 session 0x55b099433e00
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a4a8000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a4a8000 session 0x55b0994321e0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6800
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b09586ef00
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114827264 unmapped: 24690688 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:17.537586+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9df6000/0x0/0x4ffc00000, data 0x139f4a6/0x1466000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114827264 unmapped: 24690688 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:18.537844+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263995 data_alloc: 218103808 data_used: 7114752
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114827264 unmapped: 24690688 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:19.538098+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c7000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b0994fcd20
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a094800
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115130368 unmapped: 24387584 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:20.538296+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115130368 unmapped: 24387584 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:21.538472+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9dd2000/0x0/0x4ffc00000, data 0x13c34a6/0x148a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:22.538727+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 118431744 unmapped: 21086208 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:23.538922+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119930880 unmapped: 19587072 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1336483 data_alloc: 234881024 data_used: 17555456
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:24.539164+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119930880 unmapped: 19587072 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:25.539430+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119930880 unmapped: 19587072 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.006184578s of 10.114780426s, submitted: 27
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b096c4a3c0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094800 session 0x55b097a14b40
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:26.540138+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119922688 unmapped: 19595264 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9dd2000/0x0/0x4ffc00000, data 0x13c34a6/0x148a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a4a8400
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a4a8400 session 0x55b097a69860
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:27.540318+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 26353664 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:28.540716+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 26353664 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189775 data_alloc: 218103808 data_used: 7114752
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:29.541113+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 26353664 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:30.541442+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 26353664 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:31.541681+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 26353664 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:32.541903+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 26353664 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:33.545837+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 26353664 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189775 data_alloc: 218103808 data_used: 7114752
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:34.549097+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 26353664 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:35.550470+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 26353664 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:36.552975+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 26353664 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:37.553968+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 26353664 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:38.555754+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113164288 unmapped: 26353664 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189775 data_alloc: 218103808 data_used: 7114752
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6800
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b0982841e0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c7000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b096d5eb40
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b0988dcd20
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a094800
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094800 session 0x55b09638f0e0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a4a8800
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.877738953s of 12.966034889s, submitted: 31
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:39.556030+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a4a8800 session 0x55b098f8bc20
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6800
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b096e1da40
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c7000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b098fe41e0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113459200 unmapped: 26058752 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b09874b2c0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a094800
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094800 session 0x55b0990092c0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:40.556769+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113467392 unmapped: 26050560 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:41.557412+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113475584 unmapped: 26042368 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa47b000/0x0/0x4ffc00000, data 0xd19508/0xde1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:42.558082+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113475584 unmapped: 26042368 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:43.558353+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113475584 unmapped: 26042368 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223186 data_alloc: 218103808 data_used: 7118848
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a4a8c00
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a4a8c00 session 0x55b09723de00
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:44.560183+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113475584 unmapped: 26042368 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6800
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b098857e00
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa47b000/0x0/0x4ffc00000, data 0xd19508/0xde1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c7000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b09638f4a0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:45.560691+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b09936de00
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113500160 unmapped: 26017792 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a094800
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a4a9000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa479000/0x0/0x4ffc00000, data 0xd1953b/0xde3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:46.560891+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113500160 unmapped: 26017792 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: mgrc ms_handle_reset ms_handle_reset con 0x55b096656000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/194506248
Oct 10 10:30:50 compute-1 ceph-osd[76867]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/194506248,v1:192.168.122.100:6801/194506248]
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: get_auth_request con 0x55b09a4a8c00 auth_method 0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: mgrc handle_mgr_configure stats_period=5
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:47.561184+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113582080 unmapped: 25935872 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b096dcac00 session 0x55b0988cc5a0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b096dcb000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099068000 session 0x55b0991deb40
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b096dcac00
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:48.561428+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113582080 unmapped: 25935872 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237429 data_alloc: 218103808 data_used: 8724480
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:49.562116+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 25427968 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa479000/0x0/0x4ffc00000, data 0xd1953b/0xde3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:50.562405+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 25427968 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:51.562896+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114098176 unmapped: 25419776 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:52.563070+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114098176 unmapped: 25419776 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094800 session 0x55b097a15680
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a4a9000 session 0x55b096e1cd20
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6800
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.847922325s of 13.942553520s, submitted: 33
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:53.563222+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b097a15860
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa479000/0x0/0x4ffc00000, data 0xd1953b/0xde3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196900 data_alloc: 218103808 data_used: 7114752
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:54.563448+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:55.563612+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81c000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:56.563749+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:57.563951+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81c000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:58.564178+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196900 data_alloc: 218103808 data_used: 7114752
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:17:59.564439+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:00.564637+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:01.564815+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81c000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:02.565009+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:03.565254+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81c000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196900 data_alloc: 218103808 data_used: 7114752
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:04.565437+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:05.565578+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:06.565765+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:07.565983+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81c000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:08.566252+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196900 data_alloc: 218103808 data_used: 7114752
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:09.566590+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:10.566782+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113328128 unmapped: 26189824 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:11.566965+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113336320 unmapped: 26181632 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81c000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:12.567141+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113344512 unmapped: 26173440 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81c000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:13.567278+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113344512 unmapped: 26173440 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1196900 data_alloc: 218103808 data_used: 7114752
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:14.567446+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113344512 unmapped: 26173440 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:15.567641+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113344512 unmapped: 26173440 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c7000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b096c4af00
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b0988dd4a0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a094800
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094800 session 0x55b098f8b2c0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a4a9800
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a4a9800 session 0x55b097a68780
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6800
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.952396393s of 23.063278198s, submitted: 31
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa81b000/0x0/0x4ffc00000, data 0x9784bf/0xa3f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:16.567843+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b0991df860
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c7000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b098e28000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b09723cb40
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 26165248 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a094800
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094800 session 0x55b0986803c0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a4a9c00
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a4a9c00 session 0x55b098680000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:17.568043+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 26165248 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:18.568245+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113352704 unmapped: 26165248 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239657 data_alloc: 218103808 data_used: 7114752
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa21d000/0x0/0x4ffc00000, data 0xf784f8/0x103f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:19.568508+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113360896 unmapped: 26157056 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:20.568670+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113360896 unmapped: 26157056 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:21.568862+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113369088 unmapped: 26148864 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa21d000/0x0/0x4ffc00000, data 0xf784f8/0x103f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:22.569095+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113401856 unmapped: 26116096 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:23.569266+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113401856 unmapped: 26116096 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1239657 data_alloc: 218103808 data_used: 7114752
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6800
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b098681860
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:24.569456+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c7000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b098680b40
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113401856 unmapped: 26116096 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa21d000/0x0/0x4ffc00000, data 0xf784f8/0x103f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b0988cc1e0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:25.569653+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a094800
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa21d000/0x0/0x4ffc00000, data 0xf784f8/0x103f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094800 session 0x55b09874ab40
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 26107904 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09bade000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09bade400
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:26.569851+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 26099712 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:27.569953+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114032640 unmapped: 25485312 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:28.570121+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa21c000/0x0/0x4ffc00000, data 0xf78508/0x1040000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114622464 unmapped: 24895488 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283703 data_alloc: 234881024 data_used: 13406208
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:29.570306+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114630656 unmapped: 24887296 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:30.570515+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114630656 unmapped: 24887296 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:31.570724+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114630656 unmapped: 24887296 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:32.570892+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114630656 unmapped: 24887296 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:33.571077+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa21c000/0x0/0x4ffc00000, data 0xf78508/0x1040000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114630656 unmapped: 24887296 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283703 data_alloc: 234881024 data_used: 13406208
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa21c000/0x0/0x4ffc00000, data 0xf78508/0x1040000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:34.571230+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114630656 unmapped: 24887296 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:35.571391+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114638848 unmapped: 24879104 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:36.571578+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 114638848 unmapped: 24879104 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.799320221s of 20.981313705s, submitted: 25
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:37.571743+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 118800384 unmapped: 20717568 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa1a5000/0x0/0x4ffc00000, data 0xfef508/0x10b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:38.571902+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120127488 unmapped: 19390464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9ec9000/0x0/0x4ffc00000, data 0x12cb508/0x1393000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320379 data_alloc: 234881024 data_used: 13844480
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:39.572114+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120127488 unmapped: 19390464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:40.572429+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120127488 unmapped: 19390464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:41.572602+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120127488 unmapped: 19390464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:42.572802+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120135680 unmapped: 19382272 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:43.572982+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120135680 unmapped: 19382272 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320379 data_alloc: 234881024 data_used: 13844480
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9ea1000/0x0/0x4ffc00000, data 0x12f3508/0x13bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:44.573152+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120135680 unmapped: 19382272 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9ea1000/0x0/0x4ffc00000, data 0x12f3508/0x13bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:45.573423+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120143872 unmapped: 19374080 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:46.573642+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120143872 unmapped: 19374080 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9e9f000/0x0/0x4ffc00000, data 0x12f5508/0x13bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:47.573897+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120143872 unmapped: 19374080 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:48.574194+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120143872 unmapped: 19374080 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1320655 data_alloc: 234881024 data_used: 13844480
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:49.574457+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120143872 unmapped: 19374080 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:50.574674+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9e9f000/0x0/0x4ffc00000, data 0x12f5508/0x13bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120152064 unmapped: 19365888 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:51.574885+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.374687195s of 14.491823196s, submitted: 54
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119422976 unmapped: 20094976 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09bade000 session 0x55b09874a5a0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09bade400 session 0x55b0990094a0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:52.575037+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6800
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119422976 unmapped: 20094976 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b098f794a0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:53.575266+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204502 data_alloc: 218103808 data_used: 7114752
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:54.575487+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa485000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:55.575711+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:56.575984+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:57.576236+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:58.576472+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204502 data_alloc: 218103808 data_used: 7114752
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:18:59.576742+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:00.577021+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa485000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa485000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:01.577184+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:02.577439+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa485000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:03.577645+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204502 data_alloc: 218103808 data_used: 7114752
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:04.577816+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:05.578078+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:06.578285+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa485000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:07.578512+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:08.578713+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204502 data_alloc: 218103808 data_used: 7114752
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:09.578912+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:10.579080+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa485000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:11.579221+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:12.579440+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:13.579565+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204502 data_alloc: 218103808 data_used: 7114752
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:14.579716+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa485000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:15.579839+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:16.580008+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 24510464 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:17.580255+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c7000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 25.981967926s of 26.068605423s, submitted: 26
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b098284780
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 24502272 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b098fe54a0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09a094800
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09a094800 session 0x55b09723d2c0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6800
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b0970df680
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c7000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b098f8a3c0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:18.580406+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 24502272 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223317 data_alloc: 218103808 data_used: 7114752
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:19.580582+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 24502272 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa613000/0x0/0x4ffc00000, data 0xb824f8/0xc49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:20.580724+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 24502272 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:21.580900+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa613000/0x0/0x4ffc00000, data 0xb824f8/0xc49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 24502272 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b098f8b4a0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:22.581062+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09bade400
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09bade800
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 24502272 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:23.581208+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa613000/0x0/0x4ffc00000, data 0xb824f8/0xc49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 24502272 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226965 data_alloc: 218103808 data_used: 7639040
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:24.581389+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 24502272 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:25.581563+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa613000/0x0/0x4ffc00000, data 0xb824f8/0xc49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 24469504 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:26.581736+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 24469504 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:27.581893+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 24469504 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:28.582047+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 24469504 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1237757 data_alloc: 218103808 data_used: 9252864
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:29.582297+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa613000/0x0/0x4ffc00000, data 0xb824f8/0xc49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 24436736 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:30.582488+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 24436736 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:31.582661+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 24436736 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:32.582839+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 115081216 unmapped: 24436736 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:33.583061+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.933888435s of 16.003017426s, submitted: 22
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119988224 unmapped: 19529728 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1307679 data_alloc: 218103808 data_used: 9330688
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:34.583249+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9c31000/0x0/0x4ffc00000, data 0x15644f8/0x162b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 118054912 unmapped: 21463040 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:35.583437+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09badec00
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 118071296 unmapped: 21446656 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09badec00 session 0x55b098fe4780
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09badf000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09badf000 session 0x55b09936c3c0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09badf000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09badf000 session 0x55b09936d860
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6800
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b09638f4a0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:36.583623+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c7000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b09900ad20
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120176640 unmapped: 19341312 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:37.583923+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120176640 unmapped: 19341312 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f96d1000/0x0/0x4ffc00000, data 0x16b44f8/0x177b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:38.584151+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120176640 unmapped: 19341312 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1341734 data_alloc: 234881024 data_used: 10129408
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:39.584439+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120176640 unmapped: 19341312 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:40.584700+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120176640 unmapped: 19341312 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f96d1000/0x0/0x4ffc00000, data 0x16b44f8/0x177b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:41.584975+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120176640 unmapped: 19341312 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b098fe5a40
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:42.585113+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09badec00
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09badf400
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119898112 unmapped: 19619840 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:43.585287+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120004608 unmapped: 19513344 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1348526 data_alloc: 234881024 data_used: 10899456
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:44.585440+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 19603456 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:45.586462+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f96ad000/0x0/0x4ffc00000, data 0x16d84f8/0x179f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 19603456 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:46.587289+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 19603456 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:47.588140+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 19603456 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:48.588559+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 19603456 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1349742 data_alloc: 234881024 data_used: 11075584
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:49.589254+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 19603456 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:50.589613+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f96ad000/0x0/0x4ffc00000, data 0x16d84f8/0x179f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 19603456 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:51.590231+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 19603456 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:52.590427+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 19603456 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:53.590732+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 119914496 unmapped: 19603456 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1349742 data_alloc: 234881024 data_used: 11075584
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:54.590905+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.355663300s of 20.596637726s, submitted: 92
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122126336 unmapped: 17391616 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:55.591217+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122126336 unmapped: 17391616 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:56.591452+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f92d1000/0x0/0x4ffc00000, data 0x1aa64f8/0x1b6d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 16556032 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:57.591703+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 16556032 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:58.591861+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9246000/0x0/0x4ffc00000, data 0x1b374f8/0x1bfe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 16556032 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1396772 data_alloc: 234881024 data_used: 12251136
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:19:59.592161+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9246000/0x0/0x4ffc00000, data 0x1b374f8/0x1bfe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 16556032 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:00.592416+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 16556032 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:01.592640+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 16556032 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:02.592761+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 16392192 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f9246000/0x0/0x4ffc00000, data 0x1b374f8/0x1bfe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:03.592954+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 16392192 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1391276 data_alloc: 234881024 data_used: 12251136
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:04.593123+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 16392192 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:05.593262+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 16392192 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:06.593485+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 16392192 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:07.593778+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f922d000/0x0/0x4ffc00000, data 0x1b584f8/0x1c1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 16392192 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:08.593922+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 16392192 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1391276 data_alloc: 234881024 data_used: 12251136
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:09.594241+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 16392192 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:10.594449+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 16392192 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:11.594764+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123125760 unmapped: 16392192 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:12.594918+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.976144791s of 18.175519943s, submitted: 91
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f922d000/0x0/0x4ffc00000, data 0x1b584f8/0x1c1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123183104 unmapped: 16334848 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:13.595196+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123183104 unmapped: 16334848 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1391364 data_alloc: 234881024 data_used: 12251136
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:14.595414+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09badec00 session 0x55b098fe41e0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09badf400 session 0x55b0988cd0e0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6800
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123166720 unmapped: 16351232 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:15.595526+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b0994fcf00
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f97ee000/0x0/0x4ffc00000, data 0x15974f8/0x165e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120684544 unmapped: 18833408 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:16.595652+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120684544 unmapped: 18833408 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:17.595786+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120684544 unmapped: 18833408 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:18.595948+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120684544 unmapped: 18833408 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1327065 data_alloc: 234881024 data_used: 10133504
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:19.596180+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09bade400 session 0x55b09874a3c0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09bade800 session 0x55b098e29c20
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120684544 unmapped: 18833408 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c7000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:20.596314+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c7000 session 0x55b096c4b860
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:21.596497+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa159000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:22.596671+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:23.596881+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1221069 data_alloc: 218103808 data_used: 7114752
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:24.597056+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa159000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:25.597301+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:26.597552+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:27.597805+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa159000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:28.597999+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1221069 data_alloc: 218103808 data_used: 7114752
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:29.598189+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:30.598447+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:31.598664+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa159000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:32.598852+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:33.599043+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1221069 data_alloc: 218103808 data_used: 7114752
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:34.599166+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:35.599464+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa159000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:36.599753+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:37.600009+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:38.600493+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:39.600836+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1221069 data_alloc: 218103808 data_used: 7114752
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:40.601031+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa159000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:41.601188+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:42.601443+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa159000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:43.601638+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fa159000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:44.601880+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1221069 data_alloc: 218103808 data_used: 7114752
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:45.602079+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 120700928 unmapped: 18817024 heap: 139517952 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:46.602276+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6800
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 34.100059509s of 34.283664703s, submitted: 59
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b09723cd20
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09bade400
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09bade400 session 0x55b09723c960
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09bade800
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09bade800 session 0x55b096d7f2c0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09badf400
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09badf400 session 0x55b096d7e5a0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b096d7e3c0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 23461888 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:47.602491+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:48.602644+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 23461888 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:49.602859+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 23461888 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313977 data_alloc: 218103808 data_used: 7114752
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f97f4000/0x0/0x4ffc00000, data 0x1592496/0x1658000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:50.603080+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 23461888 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:51.603207+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 23461888 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:52.603396+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 23461888 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f97f4000/0x0/0x4ffc00000, data 0x1592496/0x1658000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6800
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b096d7e000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b099027000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b099027000 session 0x55b09874b2c0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:53.603575+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 121307136 unmapped: 23461888 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09bade400
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09bade400 session 0x55b09874a5a0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09bade800
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09bade800 session 0x55b0988cd860
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:54.603752+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 121315328 unmapped: 23453696 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09badf400
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1314282 data_alloc: 218103808 data_used: 7118848
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b09badf000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:55.603965+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 121339904 unmapped: 23429120 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f97f4000/0x0/0x4ffc00000, data 0x1592496/0x1658000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:56.604092+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125091840 unmapped: 19677184 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:57.604262+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 126492672 unmapped: 18276352 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:58.604471+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 126492672 unmapped: 18276352 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:20:59.604696+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f97f4000/0x0/0x4ffc00000, data 0x1592496/0x1658000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 126492672 unmapped: 18276352 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1390566 data_alloc: 234881024 data_used: 18321408
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:00.604917+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 126492672 unmapped: 18276352 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:01.605138+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 126492672 unmapped: 18276352 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:02.605468+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 126492672 unmapped: 18276352 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:03.605673+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 126492672 unmapped: 18276352 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:04.605814+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 126492672 unmapped: 18276352 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1390566 data_alloc: 234881024 data_used: 18321408
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f97f4000/0x0/0x4ffc00000, data 0x1592496/0x1658000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:05.605979+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 126492672 unmapped: 18276352 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.678905487s of 18.792392731s, submitted: 24
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:06.606208+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 10878976 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:07.606385+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 132464640 unmapped: 12304384 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f8c37000/0x0/0x4ffc00000, data 0x2149496/0x220f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:08.606582+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 132464640 unmapped: 12304384 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:09.606889+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 132464640 unmapped: 12304384 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497028 data_alloc: 234881024 data_used: 19230720
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f8c37000/0x0/0x4ffc00000, data 0x2149496/0x220f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:10.607084+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 132497408 unmapped: 12271616 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:11.607261+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 132497408 unmapped: 12271616 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f8c37000/0x0/0x4ffc00000, data 0x2149496/0x220f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [0,1,1])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:12.607586+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 132497408 unmapped: 12271616 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:13.607764+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 132497408 unmapped: 12271616 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:14.607980+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 132497408 unmapped: 12271616 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1496100 data_alloc: 234881024 data_used: 19238912
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:15.608182+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 132497408 unmapped: 12271616 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4f8c19000/0x0/0x4ffc00000, data 0x216d496/0x2233000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:16.608461+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 132497408 unmapped: 12271616 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09badf400 session 0x55b09905c000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.617673874s of 10.932350159s, submitted: 136
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b09badf000 session 0x55b0994fde00
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: handle_auth_request added challenge on 0x55b0987c6800
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 ms_handle_reset con 0x55b0987c6800 session 0x55b098fe4000
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:17.608630+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:18.608809+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:19.609028+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:20.609176+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:21.609447+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:22.609608+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:23.609752+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:24.609975+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:25.610231+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:26.610413+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:27.610575+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:28.610748+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:29.610958+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:30.611174+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:31.611380+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:32.611517+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:33.611680+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:34.611918+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:35.612128+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:36.612287+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:37.612420+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:38.612557+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:39.612765+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:40.612926+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:41.613114+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:42.613234+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:43.613487+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:44.613692+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:45.613921+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:46.614113+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:47.614280+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:48.614480+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:49.614747+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:50.614854+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:51.615053+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:52.615248+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:53.615406+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:54.615611+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:55.615762+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:56.615960+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:57.616156+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:58.616420+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:21:59.616660+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:50 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:30:50 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:50.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:00.616838+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:01.617003+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:02.617183+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:03.617398+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:04.617591+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:05.617845+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:06.618025+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:07.618262+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:08.618462+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:09.618751+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:10.618933+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:11.619163+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:12.619388+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:13.619549+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:14.619710+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:15.619892+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:16.620043+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:17.620212+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:18.620401+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123961344 unmapped: 20807680 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:19.620569+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 20799488 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:20.620796+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 20799488 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:21.621010+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 20799488 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:22.621192+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 20799488 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:23.621366+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 20799488 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:24.621500+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 20799488 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:25.621643+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 20799488 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:26.621810+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 20799488 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:27.621949+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123969536 unmapped: 20799488 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:28.622079+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: do_command 'config diff' '{prefix=config diff}'
Oct 10 10:30:50 compute-1 ceph-osd[76867]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 20815872 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: do_command 'config show' '{prefix=config show}'
Oct 10 10:30:50 compute-1 ceph-osd[76867]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 10 10:30:50 compute-1 ceph-osd[76867]: do_command 'counter dump' '{prefix=counter dump}'
Oct 10 10:30:50 compute-1 ceph-osd[76867]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 10 10:30:50 compute-1 ceph-osd[76867]: do_command 'counter schema' '{prefix=counter schema}'
Oct 10 10:30:50 compute-1 ceph-osd[76867]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:29.622225+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123781120 unmapped: 20987904 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:30.622343+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123166720 unmapped: 21602304 heap: 144769024 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: do_command 'log dump' '{prefix=log dump}'
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:31.623364+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 123166720 unmapped: 32645120 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: do_command 'perf dump' '{prefix=perf dump}'
Oct 10 10:30:50 compute-1 ceph-osd[76867]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Oct 10 10:30:50 compute-1 ceph-osd[76867]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Oct 10 10:30:50 compute-1 ceph-osd[76867]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Oct 10 10:30:50 compute-1 ceph-osd[76867]: do_command 'perf schema' '{prefix=perf schema}'
Oct 10 10:30:50 compute-1 ceph-osd[76867]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:32.623498+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122822656 unmapped: 32989184 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:33.623617+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122822656 unmapped: 32989184 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:34.623770+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122822656 unmapped: 32989184 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:35.623897+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122822656 unmapped: 32989184 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:36.624045+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122822656 unmapped: 32989184 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:37.624201+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122822656 unmapped: 32989184 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:38.624346+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122822656 unmapped: 32989184 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:39.624485+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122822656 unmapped: 32989184 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:40.624608+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122830848 unmapped: 32980992 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:41.624726+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122830848 unmapped: 32980992 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:42.624836+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122830848 unmapped: 32980992 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:43.624964+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122830848 unmapped: 32980992 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:44.625111+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122830848 unmapped: 32980992 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:45.625232+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122830848 unmapped: 32980992 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:46.625378+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122830848 unmapped: 32980992 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:47.625493+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122830848 unmapped: 32980992 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:48.625630+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122830848 unmapped: 32980992 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:49.625774+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122830848 unmapped: 32980992 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:50.625893+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122830848 unmapped: 32980992 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:51.626083+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122830848 unmapped: 32980992 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:52.626219+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122830848 unmapped: 32980992 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:53.626428+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122830848 unmapped: 32980992 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:54.626600+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122839040 unmapped: 32972800 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:55.626771+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122839040 unmapped: 32972800 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:56.626931+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122839040 unmapped: 32972800 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:57.627086+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122839040 unmapped: 32972800 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:58.627231+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122839040 unmapped: 32972800 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:22:59.627384+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122839040 unmapped: 32972800 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:00.627556+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122839040 unmapped: 32972800 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:01.627689+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122839040 unmapped: 32972800 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:02.627861+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122839040 unmapped: 32972800 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:03.628057+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122839040 unmapped: 32972800 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:04.628248+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122839040 unmapped: 32972800 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:05.628450+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122839040 unmapped: 32972800 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:06.628700+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122847232 unmapped: 32964608 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:07.628875+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122847232 unmapped: 32964608 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:08.629107+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122847232 unmapped: 32964608 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:09.629396+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122847232 unmapped: 32964608 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:10.629567+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122847232 unmapped: 32964608 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:11.629727+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122847232 unmapped: 32964608 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:12.629892+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122847232 unmapped: 32964608 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:13.630054+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122847232 unmapped: 32964608 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:14.630262+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122847232 unmapped: 32964608 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:15.630465+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122847232 unmapped: 32964608 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:16.630647+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122847232 unmapped: 32964608 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:17.630761+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122855424 unmapped: 32956416 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:18.630944+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122855424 unmapped: 32956416 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:19.631168+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122855424 unmapped: 32956416 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:20.631302+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122855424 unmapped: 32956416 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:21.631533+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122855424 unmapped: 32956416 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:22.631690+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122855424 unmapped: 32956416 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:23.631872+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122855424 unmapped: 32956416 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:24.632027+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122855424 unmapped: 32956416 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:25.632167+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122855424 unmapped: 32956416 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:26.633242+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122855424 unmapped: 32956416 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:27.637674+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122855424 unmapped: 32956416 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:28.639653+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122855424 unmapped: 32956416 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:29.643210+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122855424 unmapped: 32956416 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:30.645838+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122863616 unmapped: 32948224 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:31.648524+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122863616 unmapped: 32948224 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:32.650585+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122863616 unmapped: 32948224 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:33.650848+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122863616 unmapped: 32948224 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:34.652283+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122863616 unmapped: 32948224 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:35.653838+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122863616 unmapped: 32948224 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:36.654853+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122863616 unmapped: 32948224 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:37.656086+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122863616 unmapped: 32948224 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:38.657191+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122863616 unmapped: 32948224 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:39.657385+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122863616 unmapped: 32948224 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:40.657991+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122863616 unmapped: 32948224 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:41.658143+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122863616 unmapped: 32948224 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:42.658772+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122863616 unmapped: 32948224 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:43.659093+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122863616 unmapped: 32948224 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:44.659579+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122863616 unmapped: 32948224 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:45.660114+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122863616 unmapped: 32948224 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:46.660592+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122871808 unmapped: 32940032 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:47.661031+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122871808 unmapped: 32940032 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:48.661181+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122871808 unmapped: 32940032 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:49.661408+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122871808 unmapped: 32940032 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:50.661600+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122871808 unmapped: 32940032 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:51.661773+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122871808 unmapped: 32940032 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:52.661972+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122871808 unmapped: 32940032 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:53.662199+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122871808 unmapped: 32940032 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:54.662349+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122871808 unmapped: 32940032 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:55.662668+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122871808 unmapped: 32940032 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:56.662938+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122871808 unmapped: 32940032 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:57.663191+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122871808 unmapped: 32940032 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:58.663419+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122871808 unmapped: 32940032 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:23:59.664160+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122871808 unmapped: 32940032 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:00.664878+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122871808 unmapped: 32940032 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:01.665537+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122871808 unmapped: 32940032 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:02.666123+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122880000 unmapped: 32931840 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:03.666341+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122880000 unmapped: 32931840 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:04.666804+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122880000 unmapped: 32931840 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:05.667236+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122880000 unmapped: 32931840 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:06.667655+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122880000 unmapped: 32931840 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:07.668126+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122880000 unmapped: 32931840 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:08.668373+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122880000 unmapped: 32931840 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:09.668615+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122880000 unmapped: 32931840 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:10.668860+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122880000 unmapped: 32931840 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:11.669000+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122880000 unmapped: 32931840 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:12.669259+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122880000 unmapped: 32931840 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:13.669444+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122880000 unmapped: 32931840 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:14.669582+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122888192 unmapped: 32923648 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:15.669842+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122888192 unmapped: 32923648 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:16.670097+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122888192 unmapped: 32923648 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:17.670287+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122888192 unmapped: 32923648 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:18.670470+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122888192 unmapped: 32923648 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:19.670846+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122888192 unmapped: 32923648 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:20.671074+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122888192 unmapped: 32923648 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:21.671262+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122888192 unmapped: 32923648 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:22.671553+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122888192 unmapped: 32923648 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:23.671762+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122888192 unmapped: 32923648 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:24.672010+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 13K writes, 48K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 13K writes, 4030 syncs, 3.37 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2312 writes, 7308 keys, 2312 commit groups, 1.0 writes per commit group, ingest: 7.72 MB, 0.01 MB/s
                                           Interval WAL: 2312 writes, 996 syncs, 2.32 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122888192 unmapped: 32923648 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:25.672219+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122888192 unmapped: 32923648 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:26.672434+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122888192 unmapped: 32923648 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:27.672659+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122888192 unmapped: 32923648 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:28.672864+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122896384 unmapped: 32915456 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:29.673122+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122896384 unmapped: 32915456 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:30.673301+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122896384 unmapped: 32915456 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:31.673462+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122896384 unmapped: 32915456 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:32.673672+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122896384 unmapped: 32915456 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:33.673827+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122896384 unmapped: 32915456 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:34.673987+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122896384 unmapped: 32915456 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:35.674164+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122896384 unmapped: 32915456 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:36.674427+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122896384 unmapped: 32915456 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:37.674634+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:38.675674+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122896384 unmapped: 32915456 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:39.676014+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122896384 unmapped: 32915456 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:40.676318+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122896384 unmapped: 32915456 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:41.676602+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122896384 unmapped: 32915456 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:42.676907+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122896384 unmapped: 32915456 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:43.677104+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 32907264 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:44.677272+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 32907264 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:45.677489+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 32907264 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:46.677783+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 32907264 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:47.678006+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 32907264 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:48.678303+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 32907264 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:49.678655+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 32907264 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:50.678876+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 32907264 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:51.679108+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 32907264 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:52.679303+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 32907264 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:53.679514+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 32907264 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:54.679674+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 32907264 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:55.679882+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 32907264 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:56.680035+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 32907264 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:57.680275+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 32907264 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:58.680515+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 32907264 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:24:59.680729+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 32907264 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:00.680895+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 32907264 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:01.681172+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122912768 unmapped: 32899072 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:02.681420+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122912768 unmapped: 32899072 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:03.681702+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122912768 unmapped: 32899072 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:04.681930+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122912768 unmapped: 32899072 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:05.682076+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122912768 unmapped: 32899072 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:06.682258+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122912768 unmapped: 32899072 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:07.682484+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122912768 unmapped: 32899072 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:08.682673+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122912768 unmapped: 32899072 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:09.682978+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 32890880 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:10.683119+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 32890880 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:11.683282+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 32890880 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:12.683396+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 32890880 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:13.683671+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 32890880 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:14.683830+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 32890880 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:15.683997+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 32890880 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:16.684124+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 32890880 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:17.684309+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 32890880 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:18.684591+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 32890880 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:19.684827+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 32890880 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:20.685047+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 32890880 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:21.685303+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 32890880 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:22.685523+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 32890880 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:23.685694+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 32882688 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:24.685954+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 32882688 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:25.686209+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 32882688 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:26.686430+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 32882688 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:27.686712+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 32882688 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:28.686960+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 32882688 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:29.687414+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 32882688 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:30.687740+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 32882688 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:31.687965+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 32882688 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:32.688207+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 32882688 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:33.688437+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 32882688 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:34.688613+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 32882688 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:35.688909+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 32882688 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:36.689108+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 32882688 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:37.689307+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 32882688 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:38.689568+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 32882688 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:39.689789+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 32882688 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:40.690031+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 32874496 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:41.690202+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 32874496 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:42.690412+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 32874496 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:43.690569+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 32874496 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:44.690769+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 32874496 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:45.690949+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 32874496 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:46.691178+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 32874496 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:47.691426+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 32874496 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:48.691647+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 32874496 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:49.691915+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 32874496 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:50.692093+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 32874496 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:51.692266+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 32874496 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:52.692461+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 32874496 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:53.692706+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 32874496 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:54.693055+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 32866304 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:55.693204+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 32866304 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:56.693380+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 32866304 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:57.693528+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 32866304 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:58.693704+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 32866304 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:25:59.693901+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 32866304 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:00.694108+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 32866304 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:01.694370+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 32866304 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:02.694562+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 32866304 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:03.694830+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 32866304 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:04.695042+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 32866304 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:05.695230+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122953728 unmapped: 32858112 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:06.695435+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122953728 unmapped: 32858112 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:07.695644+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122953728 unmapped: 32858112 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:08.695856+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122953728 unmapped: 32858112 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:09.696138+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122953728 unmapped: 32858112 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:10.696489+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122953728 unmapped: 32858112 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:11.696751+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122953728 unmapped: 32858112 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:12.696950+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122953728 unmapped: 32858112 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:13.697154+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122953728 unmapped: 32858112 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:14.697395+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122953728 unmapped: 32858112 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:15.697575+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122953728 unmapped: 32858112 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:16.697764+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122953728 unmapped: 32858112 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:17.698132+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122953728 unmapped: 32858112 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:18.698409+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122953728 unmapped: 32858112 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:19.698774+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 32849920 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:20.699005+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 32849920 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:21.699235+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 32849920 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:22.699479+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 32849920 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:23.699658+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 32849920 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:24.699936+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 32849920 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:25.700126+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 32849920 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:26.700428+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 32849920 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:27.700838+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 32849920 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:28.701035+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 32849920 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:29.701257+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 32849920 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:30.701554+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 32849920 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:31.701779+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 32849920 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:32.701969+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 32849920 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:33.702155+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122961920 unmapped: 32849920 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 317.334625244s of 317.442413330s, submitted: 37
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:34.702403+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 122994688 unmapped: 32817152 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:35.702608+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235682 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 124215296 unmapped: 31596544 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:36.702903+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:37.703166+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:38.703367+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:39.703626+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:40.703817+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:41.704039+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:42.704249+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:43.704438+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:44.704659+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:45.704858+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:46.705087+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:47.705471+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:48.705675+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:49.706002+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:50.706222+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:51.706501+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:52.706758+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:53.706933+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:54.707159+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:55.707485+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:56.707676+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:57.707930+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:58.708129+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:26:59.708372+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:00.708671+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:01.708873+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:02.709076+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:03.709258+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125304832 unmapped: 30507008 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:04.709462+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 30498816 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:05.709725+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 30498816 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:06.709970+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 30490624 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:07.710231+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 30490624 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:08.710435+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 30490624 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:09.710664+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 30490624 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:10.710871+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125329408 unmapped: 30482432 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:11.711084+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125329408 unmapped: 30482432 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:12.711249+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125329408 unmapped: 30482432 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:13.711435+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125329408 unmapped: 30482432 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:14.711642+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125329408 unmapped: 30482432 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:15.711818+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125329408 unmapped: 30482432 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:16.712029+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 30474240 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:17.712295+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 30474240 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:18.712569+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 30474240 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:19.712875+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 30474240 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:20.713121+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 30474240 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:21.713383+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 30474240 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:22.713580+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125337600 unmapped: 30474240 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:23.713827+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:24.714071+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:25.714294+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:26.714515+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:27.714742+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:28.714932+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:29.715172+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:30.715423+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:31.715610+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:32.715811+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:33.715982+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:34.716222+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:35.716421+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:36.716606+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:37.716801+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:38.717000+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:39.717295+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:40.717483+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:41.717648+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:42.717801+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:43.717991+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:44.718158+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:45.718394+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:46.718713+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:47.718900+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:48.719080+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:49.719284+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:50.719475+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:51.719643+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:52.719790+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:53.719975+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:54.720176+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:55.720388+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:56.720541+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:57.720728+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:58.720898+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:27:59.721160+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:00.721366+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:01.721525+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:02.721699+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:03.721917+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:04.722076+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:05.722273+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:06.722459+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:07.722656+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:08.722831+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:09.723034+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:10.723186+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:11.723463+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:12.723641+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:13.723810+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:14.724008+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:15.724205+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:16.724407+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:17.724701+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:18.724943+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:19.725201+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:20.725407+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:21.725646+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:22.725906+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:23.726150+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:24.726429+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:25.726689+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:26.726868+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:27.727108+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:28.727450+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:29.727720+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:30.727914+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:31.728141+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:32.728434+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:33.728617+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:34.728807+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:35.729185+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:36.729400+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:37.729638+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:38.729816+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:39.730036+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:40.730258+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:41.730535+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:42.730781+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:43.730998+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:44.731201+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:45.731414+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:46.731651+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:47.731856+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:48.732048+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:49.732306+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:50.732525+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:51.732842+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:52.733115+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:53.733439+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:54.733732+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:55.733914+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:56.734103+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:57.734308+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:58.734544+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:28:59.734886+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:00.735140+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:01.735420+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:02.735725+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:03.736062+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:04.736265+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:05.736486+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:06.736667+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:07.736943+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:08.737136+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:09.737362+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:10.737522+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:11.737735+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:12.737898+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:13.738173+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:14.738407+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:15.738605+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:16.738760+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125345792 unmapped: 30466048 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:17.738991+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125353984 unmapped: 30457856 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:18.739196+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125353984 unmapped: 30457856 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:19.739478+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125353984 unmapped: 30457856 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:20.739645+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125353984 unmapped: 30457856 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:21.739828+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125353984 unmapped: 30457856 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:22.739955+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125353984 unmapped: 30457856 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:23.740189+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125353984 unmapped: 30457856 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:24.740402+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125353984 unmapped: 30457856 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets getting new tickets!
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:25.740643+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _finish_auth 0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:25.742547+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125353984 unmapped: 30457856 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:26.740788+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125353984 unmapped: 30457856 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:27.741019+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125353984 unmapped: 30457856 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:28.741232+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125353984 unmapped: 30457856 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:29.741546+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125353984 unmapped: 30457856 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:30.741703+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125362176 unmapped: 30449664 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:31.741854+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125362176 unmapped: 30449664 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:32.742064+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125362176 unmapped: 30449664 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:33.742269+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125362176 unmapped: 30449664 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:34.742456+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125362176 unmapped: 30449664 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:35.742623+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125362176 unmapped: 30449664 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:36.742824+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125362176 unmapped: 30449664 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:37.743005+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125362176 unmapped: 30449664 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:38.743165+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125362176 unmapped: 30449664 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:39.743860+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125362176 unmapped: 30449664 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:40.744058+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125362176 unmapped: 30449664 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:41.744248+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125362176 unmapped: 30449664 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:42.744448+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125362176 unmapped: 30449664 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:43.744614+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125362176 unmapped: 30449664 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:44.744771+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125362176 unmapped: 30449664 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:45.744910+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125362176 unmapped: 30449664 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:46.745152+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125362176 unmapped: 30449664 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:47.745351+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125362176 unmapped: 30449664 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:48.745535+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125362176 unmapped: 30449664 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:49.745828+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125370368 unmapped: 30441472 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:50.746009+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125370368 unmapped: 30441472 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:51.746388+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125370368 unmapped: 30441472 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:52.746694+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125370368 unmapped: 30441472 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:53.746934+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125370368 unmapped: 30441472 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:54.747107+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125370368 unmapped: 30441472 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:55.747307+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125370368 unmapped: 30441472 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:56.747570+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125370368 unmapped: 30441472 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:57.747733+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125370368 unmapped: 30441472 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:58.747896+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125370368 unmapped: 30441472 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:29:59.748123+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125370368 unmapped: 30441472 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:30:00.748316+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125370368 unmapped: 30441472 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:30:01.748593+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125370368 unmapped: 30441472 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:30:02.748762+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125378560 unmapped: 30433280 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:30:03.748917+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125378560 unmapped: 30433280 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:30:04.749076+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125378560 unmapped: 30433280 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:30:05.749255+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125378560 unmapped: 30433280 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:30:06.749411+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125378560 unmapped: 30433280 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:30:07.749570+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125378560 unmapped: 30433280 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:30:08.749720+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125378560 unmapped: 30433280 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:30:09.750289+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125378560 unmapped: 30433280 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:30:10.750464+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125378560 unmapped: 30433280 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:30:11.750647+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125386752 unmapped: 30425088 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:30:12.750853+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125386752 unmapped: 30425088 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:30:13.751014+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125394944 unmapped: 30416896 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:30:14.751156+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125394944 unmapped: 30416896 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:30:15.751282+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: osd.1 151 heartbeat osd_stat(store_statfs(0x4fb42e000/0x0/0x4ffc00000, data 0x978496/0xa3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125394944 unmapped: 30416896 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:30:16.751375+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 10 10:30:50 compute-1 ceph-osd[76867]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 10 10:30:50 compute-1 ceph-osd[76867]: bluestore.MempoolThread _resize_shards cache_size: 2845415833 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1235466 data_alloc: 218103808 data_used: 7049216
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125411328 unmapped: 30400512 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: do_command 'config diff' '{prefix=config diff}'
Oct 10 10:30:50 compute-1 ceph-osd[76867]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 10 10:30:50 compute-1 ceph-osd[76867]: do_command 'config show' '{prefix=config show}'
Oct 10 10:30:50 compute-1 ceph-osd[76867]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 10 10:30:50 compute-1 ceph-osd[76867]: do_command 'counter dump' '{prefix=counter dump}'
Oct 10 10:30:50 compute-1 ceph-osd[76867]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 10 10:30:50 compute-1 ceph-osd[76867]: do_command 'counter schema' '{prefix=counter schema}'
Oct 10 10:30:50 compute-1 ceph-osd[76867]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:30:17.751521+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125190144 unmapped: 30621696 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:30:18.751690+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: prioritycache tune_memory target: 4294967296 mapped: 125583360 unmapped: 30228480 heap: 155811840 old mem: 2845415833 new mem: 2845415833
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: tick
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_tickets
Oct 10 10:30:50 compute-1 ceph-osd[76867]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-10T10:30:19.751856+0000)
Oct 10 10:30:50 compute-1 ceph-osd[76867]: do_command 'log dump' '{prefix=log dump}'
Oct 10 10:30:50 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct 10 10:30:50 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3542325360' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 10:30:50 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Oct 10 10:30:50 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/838423744' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 10 10:30:51 compute-1 ceph-mon[79167]: from='client.18501 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:51 compute-1 ceph-mon[79167]: pgmap v1350: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:30:51 compute-1 ceph-mon[79167]: from='client.27227 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:51 compute-1 ceph-mon[79167]: from='client.28054 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:51 compute-1 ceph-mon[79167]: from='client.18519 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:51 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/598726768' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 10 10:30:51 compute-1 ceph-mon[79167]: from='client.27239 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:51 compute-1 ceph-mon[79167]: from='client.28075 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:51 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3542325360' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 10 10:30:51 compute-1 ceph-mon[79167]: from='client.18525 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:51 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3712051660' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 10 10:30:51 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2779081294' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 10 10:30:51 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/838423744' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 10 10:30:51 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Oct 10 10:30:51 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1286859206' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 10 10:30:51 compute-1 nova_compute[235132]: 2025-10-10 10:30:51.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:30:51 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:51 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:51 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:51.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:51 compute-1 crontab[261703]: (root) LIST (root)
Oct 10 10:30:51 compute-1 podman[261742]: 2025-10-10 10:30:51.995799332 +0000 UTC m=+0.084725037 container health_status 13266a1b088f1c2e30138b94d0d0a5cab9ba3c7f143de993da6c5dffe63b5143 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid)
Oct 10 10:30:52 compute-1 podman[261744]: 2025-10-10 10:30:52.009930798 +0000 UTC m=+0.100857388 container health_status bbdb83d17df615de39d6b28b1464b880b2f14a6de58114688b263d70d4ed04c7 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 10 10:30:52 compute-1 podman[261743]: 2025-10-10 10:30:52.022126782 +0000 UTC m=+0.110549223 container health_status b54079627aa3fb5572c753ab7bd1ae474f5396bc6d8571da824337191885d6ad (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 10 10:30:52 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:52 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:52 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:52.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:52 compute-1 ceph-mon[79167]: from='client.27245 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:52 compute-1 ceph-mon[79167]: from='client.28099 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:52 compute-1 ceph-mon[79167]: from='client.18540 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:52 compute-1 ceph-mon[79167]: from='client.28117 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:52 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/3755513474' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 10 10:30:52 compute-1 ceph-mon[79167]: from='client.27266 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:52 compute-1 ceph-mon[79167]: from='client.18564 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:52 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/1286859206' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 10 10:30:52 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2752456730' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 10 10:30:52 compute-1 ceph-mon[79167]: from='client.28132 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:52 compute-1 ceph-mon[79167]: from='client.27281 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:52 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/4144306905' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 10 10:30:52 compute-1 ceph-mon[79167]: from='client.18582 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:52 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/251813461' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 10 10:30:52 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/821557892' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 10 10:30:52 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1210927648' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 10 10:30:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Oct 10 10:30:52 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2364379109' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 10 10:30:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Oct 10 10:30:52 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2672443346' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 10 10:30:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:30:52 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Oct 10 10:30:52 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2579466127' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 10 10:30:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:30:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:30:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:52 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:30:53 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:53 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:30:53 compute-1 ceph-mon[79167]: pgmap v1351: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:30:53 compute-1 ceph-mon[79167]: from='client.27296 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:53 compute-1 ceph-mon[79167]: from='client.18603 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:53 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/2364379109' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 10 10:30:53 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/3064887336' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 10 10:30:53 compute-1 ceph-mon[79167]: from='client.27308 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:53 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2197365172' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 10 10:30:53 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1586037795' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 10 10:30:53 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/865574371' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 10 10:30:53 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/2672443346' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 10 10:30:53 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2319149536' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 10 10:30:53 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2221259386' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 10 10:30:53 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1413997151' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 10 10:30:53 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2285275171' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 10 10:30:53 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/2579466127' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 10 10:30:53 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Oct 10 10:30:53 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1182933054' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 10 10:30:53 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:53 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:53 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:53.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:53 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Oct 10 10:30:53 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4229550321' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 10 10:30:53 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Oct 10 10:30:53 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1369088152' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 10 10:30:54 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Oct 10 10:30:54 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/29899441' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 10 10:30:54 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:54 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:54 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:54.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:54 compute-1 nova_compute[235132]: 2025-10-10 10:30:54.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:30:54 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Oct 10 10:30:54 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1625571292' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 10 10:30:54 compute-1 ceph-mon[79167]: from='client.27326 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:54 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1814715029' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 10 10:30:54 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1351759873' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 10 10:30:54 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/4094752344' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 10 10:30:54 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3898864091' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 10 10:30:54 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2844202342' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 10 10:30:54 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/1182933054' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 10 10:30:54 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/112314830' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 10 10:30:54 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/4229550321' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 10 10:30:54 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1828719708' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 10 10:30:54 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/847432332' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 10 10:30:54 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/4280476709' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 10 10:30:54 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/1369088152' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 10 10:30:54 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/853849376' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 10 10:30:54 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/29899441' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 10 10:30:54 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1052434041' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 10 10:30:54 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/827572701' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 10 10:30:54 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Oct 10 10:30:54 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/84867561' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 10 10:30:54 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Oct 10 10:30:54 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3675716813' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 10 10:30:54 compute-1 systemd[1]: Starting Hostname Service...
Oct 10 10:30:54 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Oct 10 10:30:54 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1162105813' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 10 10:30:54 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct 10 10:30:54 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2686277470' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 10 10:30:54 compute-1 systemd[1]: Started Hostname Service.
Oct 10 10:30:55 compute-1 ceph-mon[79167]: pgmap v1352: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:30:55 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/1625571292' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 10 10:30:55 compute-1 ceph-mon[79167]: from='client.28288 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:55 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2382901546' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 10 10:30:55 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/84867561' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 10 10:30:55 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2698340361' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 10 10:30:55 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3675716813' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 10 10:30:55 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/109351117' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 10 10:30:55 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/1162105813' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 10 10:30:55 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/2686277470' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 10 10:30:55 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1820525157' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 10 10:30:55 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1466512300' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 10 10:30:55 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Oct 10 10:30:55 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3951488466' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 10 10:30:55 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Oct 10 10:30:55 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2884419151' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 10 10:30:55 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:55 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:30:55 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:55.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:30:55 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Oct 10 10:30:55 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1094418802' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 10 10:30:56 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:56 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:56 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:56.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:56 compute-1 ceph-mon[79167]: from='client.28312 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:56 compute-1 ceph-mon[79167]: from='client.28318 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:56 compute-1 ceph-mon[79167]: from='client.28336 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:56 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3951488466' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 10 10:30:56 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/2884419151' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 10 10:30:56 compute-1 ceph-mon[79167]: from='client.18726 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:56 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3980240607' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 10 10:30:56 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/3888181591' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 10 10:30:56 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/1094418802' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 10 10:30:56 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2207188954' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 10 10:30:56 compute-1 nova_compute[235132]: 2025-10-10 10:30:56.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:30:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Oct 10 10:30:57 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3022188802' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 10 10:30:57 compute-1 ceph-mon[79167]: from='client.28357 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:57 compute-1 ceph-mon[79167]: pgmap v1353: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:30:57 compute-1 ceph-mon[79167]: from='client.27446 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:57 compute-1 ceph-mon[79167]: from='client.18750 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:57 compute-1 ceph-mon[79167]: from='client.28372 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:57 compute-1 ceph-mon[79167]: from='client.18759 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:57 compute-1 ceph-mon[79167]: from='client.27458 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:57 compute-1 ceph-mon[79167]: from='client.27464 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:57 compute-1 ceph-mon[79167]: from='client.28384 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:57 compute-1 ceph-mon[79167]: from='client.18780 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:57 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/988730199' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 10 10:30:57 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/478578613' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 10 10:30:57 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1923095157' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 10 10:30:57 compute-1 ceph-mon[79167]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 10:30:57 compute-1 ceph-mon[79167]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 10:30:57 compute-1 ceph-mon[79167]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 10:30:57 compute-1 ceph-mon[79167]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 10:30:57 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 10:30:57 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 10:30:57 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:57 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:30:57 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:57.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:30:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:30:57 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "versions"} v 0)
Oct 10 10:30:57 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1478406306' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 10 10:30:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:57 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:30:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:30:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:30:58 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:30:58 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:30:58 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:58 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.001000027s ======
Oct 10 10:30:58 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:30:58.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Oct 10 10:30:58 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Oct 10 10:30:58 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3424359302' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 10 10:30:58 compute-1 ceph-mon[79167]: from='client.27470 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:58 compute-1 ceph-mon[79167]: from='client.18798 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:58 compute-1 ceph-mon[79167]: from='client.18807 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:58 compute-1 ceph-mon[79167]: from='client.27488 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:58 compute-1 ceph-mon[79167]: from='client.28420 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:58 compute-1 ceph-mon[79167]: from='client.18825 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:58 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3022188802' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 10 10:30:58 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3453675055' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 10 10:30:58 compute-1 ceph-mon[79167]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 10:30:58 compute-1 ceph-mon[79167]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 10:30:58 compute-1 ceph-mon[79167]: from='client.27506 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:58 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/477626751' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 10 10:30:58 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2053950729' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 10 10:30:58 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/1478406306' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 10 10:30:58 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1622925211' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 10 10:30:58 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3424359302' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 10 10:30:58 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 10:30:58 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 10:30:58 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Oct 10 10:30:58 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1807734272' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 10 10:30:59 compute-1 nova_compute[235132]: 2025-10-10 10:30:59.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:30:59 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 10:30:59 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 10:30:59 compute-1 ceph-mon[79167]: from='client.18858 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:59 compute-1 ceph-mon[79167]: pgmap v1354: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 10 10:30:59 compute-1 ceph-mon[79167]: from='client.27527 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:59 compute-1 ceph-mon[79167]: from='client.18876 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:59 compute-1 ceph-mon[79167]: from='client.28471 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:30:59 compute-1 ceph-mon[79167]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 10:30:59 compute-1 ceph-mon[79167]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 10:30:59 compute-1 ceph-mon[79167]: from='client.27539 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:59 compute-1 ceph-mon[79167]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 10:30:59 compute-1 ceph-mon[79167]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 10:30:59 compute-1 ceph-mon[79167]: from='client.18897 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:30:59 compute-1 ceph-mon[79167]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 10:30:59 compute-1 ceph-mon[79167]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 10:30:59 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/102737578' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 10 10:30:59 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/1807734272' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 10 10:30:59 compute-1 ceph-mon[79167]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 10:30:59 compute-1 ceph-mon[79167]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 10:30:59 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1691596010' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 10 10:30:59 compute-1 ceph-mon[79167]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 10:30:59 compute-1 ceph-mon[79167]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 10:30:59 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/4055382231' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 10 10:30:59 compute-1 ceph-mon[79167]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 10 10:30:59 compute-1 ceph-mon[79167]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 10 10:30:59 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:30:59 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:30:59 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:30:59.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:30:59 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Oct 10 10:30:59 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1510796144' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 10 10:31:00 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:31:00 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:31:00 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:31:00.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:31:00 compute-1 ceph-mon[79167]: from='client.27557 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 10 10:31:00 compute-1 ceph-mon[79167]: from='client.18954 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:31:00 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1183362410' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 10 10:31:00 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/1510796144' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 10 10:31:00 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/203338503' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 10 10:31:00 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/1997365182' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 10 10:31:00 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/525258780' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 10 10:31:00 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Oct 10 10:31:00 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1908662771' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 10 10:31:00 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "df"} v 0)
Oct 10 10:31:00 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2775988440' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 10 10:31:01 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Oct 10 10:31:01 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3838984059' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 10 10:31:01 compute-1 ceph-mon[79167]: pgmap v1355: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:31:01 compute-1 ceph-mon[79167]: from='client.27602 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:31:01 compute-1 ceph-mon[79167]: from='client.28552 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:31:01 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/1908662771' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 10 10:31:01 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/1605429018' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 10 10:31:01 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/3635287387' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 10 10:31:01 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/2775988440' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 10 10:31:01 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/2780799251' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 10 10:31:01 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/2314951990' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 10 10:31:01 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3838984059' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 10 10:31:01 compute-1 nova_compute[235132]: 2025-10-10 10:31:01.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 10 10:31:01 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:31:01 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:31:01 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.102 - anonymous [10/Oct/2025:10:31:01.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:31:01 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "fs ls"} v 0)
Oct 10 10:31:01 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3786071962' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 10 10:31:02 compute-1 radosgw[84063]: ====== starting new request req=0x7f546efdb5d0 =====
Oct 10 10:31:02 compute-1 radosgw[84063]: ====== req done req=0x7f546efdb5d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 10 10:31:02 compute-1 radosgw[84063]: beast: 0x7f546efdb5d0: 192.168.122.100 - anonymous [10/Oct/2025:10:31:02.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 10 10:31:02 compute-1 ceph-mon[79167]: from='mgr.14709 192.168.122.100:0/3269626124' entity='mgr.compute-0.xkdepb' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 10 10:31:02 compute-1 ceph-mon[79167]: from='client.18996 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:31:02 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3786071962' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 10 10:31:02 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/685684035' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 10 10:31:02 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/444957168' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 10 10:31:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mds stat"} v 0)
Oct 10 10:31:02 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3623178689' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 10 10:31:02 compute-1 ceph-mon[79167]: mon.compute-1@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct 10 10:31:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:31:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 10 10:31:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:31:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 10 10:31:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:31:02 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 10 10:31:03 compute-1 ceph-21f084a3-af34-5230-afe4-ea5cd24a55f4-nfs-cephfs-0-0-compute-1-mssvzx[241990]: 10/10/2025 10:31:03 : epoch 68e8dc7c : compute-1 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 10 10:31:03 compute-1 ceph-mon[79167]: mon.compute-1@2(peon) e3 handle_command mon_command({"prefix": "mon dump"} v 0)
Oct 10 10:31:03 compute-1 ceph-mon[79167]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1826634635' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 10 10:31:03 compute-1 ceph-mon[79167]: from='client.28579 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:31:03 compute-1 ceph-mon[79167]: pgmap v1356: 353 pgs: 353 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 10 10:31:03 compute-1 ceph-mon[79167]: from='client.27644 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:31:03 compute-1 ceph-mon[79167]: from='client.28597 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 10 10:31:03 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/3622754536' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 10 10:31:03 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/3623178689' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 10 10:31:03 compute-1 ceph-mon[79167]: from='client.? 192.168.122.101:0/1826634635' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 10 10:31:03 compute-1 ceph-mon[79167]: from='client.? 192.168.122.102:0/4000583183' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 10 10:31:03 compute-1 ceph-mon[79167]: from='client.? 192.168.122.100:0/969988928' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
